Strangler Fig Pattern to Move from Monolith to Microservices

You’ve been given the green light to modernize a legacy system; however, the monolithic application provides core functionality to the entire organization. It has to remain operational during the process. You could leave the existing code in production while developing a microservices-based architecture, but you don’t have the resources to modernize and maintain the old application.

If your modernization projects require continuous operation, then applying the Strangler Fig pattern may be the migration strategy to use. It minimizes the impact on production systems and reduces disruption to the entire organization. So, how does the Strangler Fig pattern facilitate the modernization of legacy systems to microservices?

What is the Strangler Fig pattern?

The strangler fig pattern takes its name from the strangler fig tree, which begins as a seed lodged in a host tree’s branches. The seedling sends roots downward until they take hold in the soil surrounding the host tree. Eventually, the strangler fig replaces the host tree.

The Strangler Fig pattern in modernization operates on the same principle: gradually replace legacy functionality with microservices until none of the old code remains. Using the Strangler Fig pattern begins with a facade interface. The interface serves as a bridge between the legacy system and the emerging microservice code.

What is a Facade?

The facade interface operates between the client side and the back-end code. When the facade receives a client request, it routes the traffic to either the legacy system or a microservice. As more microservices come online, the facade sends more traffic to the modernized system until the legacy system no longer exists. Once migration is complete, the facade is removed, and clients communicate directly with the microservices.

As developers create microservices to replace existing functionality, they can test individual services, minimizing the risk of operational failure. If a problem arises, programmers can address it quickly as they only work with a single microservice. The facade can continue to route requests to the legacy system until the code can be safely released into production.
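As a rough illustration of the routing role the facade plays, the sketch below picks a backend based on the request path. The paths, hostnames, and class names are hypothetical, and in practice this logic typically lives in an API gateway or reverse proxy rather than hand-rolled code.

import java.util.Map;

// Minimal illustrative sketch of a strangler facade's routing decision.
// Paths, hostnames, and service names are invented for illustration.
public class StranglerFacadeRouter {

    private static final String LEGACY_BASE_URL = "http://legacy-monolith.internal";

    // Paths whose functionality has already been extracted into microservices.
    private final Map<String, String> migratedRoutes = Map.of(
            "/orders",  "http://orders-service.internal",
            "/billing", "http://billing-service.internal"
    );

    /** Returns the backend base URL an incoming request should be forwarded to. */
    public String resolveBackend(String requestPath) {
        return migratedRoutes.entrySet().stream()
                .filter(e -> requestPath.startsWith(e.getKey()))
                .map(Map.Entry::getValue)
                .findFirst()
                .orElse(LEGACY_BASE_URL);   // everything else still goes to the monolith
    }

    public static void main(String[] args) {
        StranglerFacadeRouter router = new StranglerFacadeRouter();
        System.out.println(router.resolveBackend("/orders/42"));   // routed to the new orders service
        System.out.println(router.resolveBackend("/inventory/7")); // still routed to the legacy monolith
    }
}

As more functionality is extracted, entries are added to the routing table; when the table covers everything, the legacy entry and eventually the facade itself can be retired.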

However, the efficacy of the Strangler Fig pattern depends on understanding the complexity of the existing code and on the resilience of the facade itself.

Understanding Code Complexity

Although the Strangler Fig pattern seems like the perfect solution for modernizing code with minimal risk, its success or failure depends on identifying the functions that should be turned into microservices. It means sorting through lines of code to isolate individual functionality. If the codebase is small, the Strangler Fig pattern adds a layer of complexity that is not required.

However, organizations working with millions of lines of code can use the pattern to segment migration and minimize risk. Identifying and managing patterns contributing to code complexity can simplify the modernization process.

Untangle Spaghetti Code

Spaghetti code refers to legacy applications that lack structure. Without a logical construct for the application, developers struggle to understand how the code flows. Fixing spaghetti code often relies on guesswork, leading to miscalculations and operational disruptions.

Remove Dead Code

Dead code is code that still executes but no longer affects the application’s behavior. Unreachable code is related but distinct: it exists in the codebase yet never executes at all. Both patterns complicate program logic and increase the likelihood of a dependency being missed.
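A small, invented Java fragment makes the distinction concrete: the local calculation executes but its result is never used (dead code), while the flagged branch exists in the codebase but never runs (effectively unreachable).

public class InvoiceCalculator {

    private static final boolean LEGACY_DISCOUNT_ENABLED = false; // flag has been off for years

    public double totalWithTax(double subtotal) {
        // Dead code: executes on every call, but its result is never read.
        double roundedSubtotal = Math.round(subtotal * 100) / 100.0;

        // Effectively unreachable code: the branch exists but never runs
        // because the flag is a constant false.
        if (LEGACY_DISCOUNT_ENABLED) {
            subtotal = subtotal * 0.98;
        }

        return subtotal * 1.08;
    }
}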

Avoid Code Proliferation

Programmatic intermediaries can help new applications talk to legacy systems, but objects that exist to call other objects increase the codebase without adding value. In most instances, the middle object can be removed.
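A hypothetical example of such a middle object: the gateway below adds no behavior of its own and only forwards calls, so callers could invoke the underlying service directly and the wrapper could be deleted.

// Invented example of code proliferation: OrderGateway exists only to call LegacyOrderService.
public class OrderGateway {

    private final LegacyOrderService legacyOrderService = new LegacyOrderService();

    public String fetchOrderStatus(String orderId) {
        return legacyOrderService.getStatus(orderId); // pure pass-through, no added value
    }
}

class LegacyOrderService {
    String getStatus(String orderId) {
        return "SHIPPED"; // stand-in for a real legacy lookup
    }
}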

Why Facades Must be Resilient and Secure

The facade is what keeps a Strangler Fig pattern functioning. It ensures that incoming traffic is routed to the appropriate back end. If it fails, all or part of the production system could fail. If the legacy system is a critical-path application, resiliency must be designed into the facade.  

Design for Resilience

Resilience should be designed into the facade and include capabilities to help with processing surges from batch updates. Legacy systems often use batch updating for maintaining core information. When those files are sent record by record without throttling, systems can be overwhelmed. Designing solutions that operate in separate environments can reduce cascading failures. Resilient architectures can minimize possible failures during migration.
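One simple way to protect a new microservice from an unthrottled batch replay is to pace records as they are forwarded. The sketch below is a minimal illustration under stated assumptions (the record type, rate, and downstream call are invented); real systems would more likely use a message queue or a rate-limiting library.

import java.time.Duration;
import java.util.List;

// Illustrative throttle for replaying a legacy batch file record by record.
public class ThrottledBatchReplayer {

    private static final int MAX_RECORDS_PER_SECOND = 50;

    public void replay(List<String> records) throws InterruptedException {
        long pauseMillis = Duration.ofSeconds(1).toMillis() / MAX_RECORDS_PER_SECOND;
        for (String record : records) {
            sendToMicroservice(record);
            Thread.sleep(pauseMillis); // fixed-rate pause so the new service is not overwhelmed
        }
    }

    private void sendToMicroservice(String record) {
        // In a real system this would be an HTTP or message-queue call.
        System.out.println("forwarded: " + record);
    }
}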

Build in Security

With high traffic volumes, facades can be vulnerable to cyberattacks. Zero trust architecture can address server-to-server vulnerabilities. APIs expose components of a monolithic system to outside sources that lack strong security. When converting to microservices, protection from external attacks cannot be assumed. Security considerations should be included in any modernization strategy.

Related: Strangler Architecture Pattern for Modernization

How to Use the Strangler Fig Pattern with Microservices

The Strangler Fig pattern lets developers update code incrementally. There’s no need to shut down the legacy system and risk an outage if the new code doesn’t work as planned. Instead, the software is refactored, and the legacy system’s functions are gradually cut off. The iterative process allows development teams to focus on refactoring one service at a time. It eliminates the need for multiple teams to maintain two systems.

Down-size God Classes

A single class that grows to encompass multiple responsibilities makes moving to microservices daunting for even the best developers. A god class and its thousands of lines of code are referenced from methods throughout the entire codebase. Moving or deleting code from the god class can have unexpected outcomes because its interdependencies are so difficult to identify.

With the Strangler Fig pattern, place variables in object-based data structures. Store the god-class code in an object that links to the appropriate structure. Use the data structure in the microservice and reflect the change in the legacy code. As modernization progresses, well-structured code replaces god classes until they no longer exist.
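A minimal sketch of that idea, with invented class and field names: the related variables move into an object-based structure, and the god class keeps a reference to it, so legacy call sites continue to work while the new microservice uses the structure directly.

// Step 1: group the related variables into an object-based data structure.
class CustomerProfile {
    String customerId;
    String email;
    String tier;
}

// Step 2: the god class holds a reference to the structure instead of loose fields,
// so legacy accessors keep working while new services consume CustomerProfile directly.
class OrderGodClass {
    private final CustomerProfile customer = new CustomerProfile();

    String customerEmail() {
        return customer.email; // legacy accessor now delegates to the extracted structure
    }
}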

Replace Hard-Coded Statements

Hard-coded statements should be replaced with dynamic services. Java-based applications often use hard-coded SQL statements. These statements inhibit code agility. When creating microservices with the Strangler Fig pattern, these statements can be replaced or removed incrementally until all hard-coded statements are removed. The logic in the legacy system can be disabled, leaving dynamic microservices.
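As a small, hypothetical before-and-after (the table and column names are invented), the first method shows the hard-coded, concatenated statement typical of legacy Java code, and the second shows a parameterized query that a microservice can own and later replace entirely.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class AccountLookup {

    // Legacy style: the statement is baked into the code and concatenates input.
    public ResultSet findLegacy(Connection conn, String accountId) throws SQLException {
        String sql = "SELECT balance FROM ACCOUNTS WHERE ACCOUNT_ID = '" + accountId + "'";
        return conn.createStatement().executeQuery(sql);
    }

    // Incremental replacement: a parameterized query that is safer and easier to swap out later.
    public ResultSet findModern(Connection conn, String accountId) throws SQLException {
        PreparedStatement stmt =
                conn.prepareStatement("SELECT balance FROM ACCOUNTS WHERE ACCOUNT_ID = ?");
        stmt.setString(1, accountId);
        return stmt.executeQuery();
    }
}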

Ensure Data Integrity

Most databases use triggers that execute code in response to events. For example, a financial transaction is sent for authorization, and its corresponding data is placed in a database. A reversal of that transaction is received, which triggers the code to revise the transaction status field. To ensure data integrity, design the new system to capture the data from the legacy system. Eventually, the new database will contain the most recent information. Older data can be purged or archived, depending on the data storage requirements.
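A minimal sketch of keeping the new data store in step with the legacy system during migration, assuming the legacy system can emit (or be polled for) transaction events; the event names and in-memory store are stand-ins for illustration.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class TransactionSyncHandler {

    // Stand-in for the new microservice's data store.
    private final Map<String, String> transactionStatusStore = new ConcurrentHashMap<>();

    // Called for every transaction event the legacy system emits during migration.
    public void onLegacyEvent(String transactionId, String eventType) {
        if ("AUTHORIZATION".equals(eventType)) {
            transactionStatusStore.put(transactionId, "AUTHORIZED");
        } else if ("REVERSAL".equals(eventType)) {
            // Mirrors what the legacy trigger does, so both stores stay consistent
            // until the legacy database can be archived or purged.
            transactionStatusStore.put(transactionId, "REVERSED");
        }
    }
}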

Why Modernization Needs Continuous Monitoring

Modernization requires continuous monitoring. For example, checking the security designed into each microservice can ensure a robust security posture when the modernization is complete. Here are three areas to act on when moving to microservices.

Create Seamless Communication

Microservices should communicate seamlessly, whether it’s through APIs or other messaging services. Message gateways should handle routing, request filtering, and rate limiting. Including a mechanism to allow retries if a request fails adds resiliency to microservice implementations. Internal communications can be monitored using existing tools. Service mesh technology can also monitor internal communications; however, implementing a service mesh at the start of a modernization process is not recommended as it adds to the project’s complexity.
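A bare-bones retry helper of the kind described above might look like the following; the attempt count and backoff values are arbitrary illustrative choices, not defaults from any particular library.

import java.util.concurrent.Callable;

public class RetryingClient {

    public static <T> T callWithRetry(Callable<T> request, int maxAttempts) throws Exception {
        long backoffMillis = 200;
        Exception lastFailure = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return request.call();
            } catch (Exception e) {
                lastFailure = e;
                Thread.sleep(backoffMillis);
                backoffMillis *= 2; // exponential backoff between attempts
            }
        }
        throw lastFailure; // surface the final failure after exhausting retries
    }
}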

Build in Rollback Capabilities

While the Strangler Fig pattern for microservices can minimize the odds of a catastrophic failure, that doesn’t mean that every service will work. Ensure there are built-in mechanisms that automatically roll back to the last functioning state. Each microservice should report its operational health to ensure operational integrity. 

Eliminate Cascading Service Failures

When a microservice fails, the failure should not impact the rest of the application. A circuit breaker pattern acts as a fail-safe that prevents cascading failures. If a service fails, the breaker can retry the request a preset number of times; if failures continue, it trips and stops forwarding traffic, then periodically tests the connection until communication is restored.
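The following hand-rolled sketch shows the core of the idea: after a run of failures the breaker trips, short-circuits calls to a fallback, and periodically lets a request through to test whether the service has recovered. Production systems would normally reach for an established library such as Resilience4j rather than this sketch.

import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

public class SimpleCircuitBreaker {

    private final int failureThreshold;
    private final Duration retryInterval;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public SimpleCircuitBreaker(int failureThreshold, Duration retryInterval) {
        this.failureThreshold = failureThreshold;
        this.retryInterval = retryInterval;
    }

    public <T> T call(Supplier<T> request, T fallback) {
        // While open, short-circuit calls until the retry interval has elapsed.
        if (openedAt != null && Instant.now().isBefore(openedAt.plus(retryInterval))) {
            return fallback;
        }
        try {
            T result = request.get();
            consecutiveFailures = 0;   // success closes the breaker again
            openedAt = null;
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            if (consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // trip the breaker; periodic test calls resume later
            }
            return fallback;
        }
    }
}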

Automating Modernization

Assessing a modernization effort takes hours of combing through code. It means evaluating code complexities such as god classes, unreachable or dead code, and code proliferation. Planning involves deciding on microservice granularity. Too many services generate overhead and add complexity. Too few can reduce agility and hamper independent operations.

Automating assessments using AI-based platforms can save hours of labor and provide a more accurate result. Static and dynamic analyses evaluate the existing codebase for the following:

  • Technical debt
  • Interdependencies
  • Domain boundaries
  • Code complexity
  • Risk

Through the analyses, automated solutions can quantify the effort needed to refactor an application. With the results, development teams can identify a starting point for modernization. 

Related: Four Advantages of Refactoring that Java Architects Love

Reduce Risk and Increase Success with Automated Refactoring

With the right automated tools, monolithic applications can be modernized and deployed quickly. vFunction’s Code Copy can identify dependencies, divide services by domain, and introduce newer frameworks. It’s a multidimensional analytical approach that tracks code behaviors, call stacks, and database usage.

Using an automated refactoring platform, organizations can quickly convert monolithic code to microservices that fit within a Strangler Fig pattern approach for microservice modernization. They can help identify migration sequences and determine the scope of each microservice. Automated tools can even flag legacy services that should not be turned into microservices. 

Before starting a green-lighted modernization project, contact vFunction to request a demo and see how automation supports the Strangler Fig pattern.

How Opportunity Costs Can Reshape Measuring Technical Debt

As a Chief Information Officer (CIO) or Chief Technology Officer (CTO), you and your team may have spent weeks, if not months, every year measuring the technical debt of your legacy applications and infrastructure. You’ve examined aging frameworks, software defect patterns, code quality, release frequencies, and technical debt ratios. You’ve presented the data to other executives. You’ve even explained the modernization process to the company’s Board. When it comes time to approve the process, everyone hesitates. 

The CEO understands that maintenance costs will increase the older the technology becomes. The Board knows that legacy-system programmers are hard to find. They realize that the old technology will eventually reach its end of life if it hasn’t already. The decision-makers weigh that information against the cost of modernization, the potential operational disruption, and the lost productivity as the migration occurs — and decide to wait.

Sound familiar? Failing to receive approval can be disheartening after investing significant resources in trying to measure technical debt. However, the effort isn’t a total loss. The data can be used to demonstrate the opportunity costs of inaction. After all, business leaders understand the concept of lost opportunities.

Business executives know that reducing financial debt frees funds for investing in new markets or launching new products. Paying off technical debt is no different. With less technical debt, organizations will have the agility to take advantage of future opportunities. They can pivot quickly when unexpected events change the economic landscape. Unfortunately, developers rarely make technical decisions or justify modernization in terms of opportunity costs.

What Are Opportunity Costs When Measuring Technical Debt?

Opportunity cost in economics is the value of the next-best alternative when a decision is made. It represents what is lost when one option is chosen over another. People make either/or decisions every day, most without thinking of the opportunity costs. 

For example, you spend $10.00 on a cup of coffee on your way to work (even if the walk is from one room in your house to another). The explicit opportunity cost is what else you could have purchased with the $10.00. But opportunity costs have an implicit cost as well.

Suppose you could use the money to buy ice cream for yourself and your child. The experience of buying and eating the ice cream together strengthens your relationship. How do you place a price on the experience? Quantifying implicit costs is difficult, if not impossible. However, it can be an essential intangible that can guide a decision.

When developers do not consider opportunity costs, they make decisions that often lead to technical debt that prevents organizations from achieving their business goals. Let’s look at how opportunity costs become technical debt.

How Opportunity Costs Become Technical Debt

Technical debt is the opportunity cost of a prior decision. Most development projects start with three variables — time, cost, and quality. The shorter the timeline, the higher the cost and the lower the quality. Limited resources (costs) can impact the quality and timeliness of the deliverable. Higher quality usually requires more time and money. 

When choosing among the variables, most project managers or developers know which one to favor, given the circumstances. If the software delivery is running late, the chosen option is the one that doesn’t lengthen the timeline. What doesn’t happen is an assessment of the opportunity costs of the options that weren’t selected. Those neglected opportunity costs can turn into technical debt.

Let’s assume that a legacy system has a series of configuration files except for one module. That module has the data in a table. No one knows why, but they assume other priorities got in the way. Years later, the table needs to be addressed because new data needs to be added. Turning the table into a configuration file is the explicit cost that wasn’t calculated when the decision was made to leave the table alone. 

Using Opportunity Costs to Reshape the Technical Debt Discussion

Whether modernizing legacy systems or reducing technical debt, the goal is to replace or remove code that inhibits an organization’s ability to achieve business goals. As part of the process, it’s assumed that past methods would be revised to create a system that minimizes technical debt. 

Modernization does not always lead to reduced technical debt. According to McKinsey, 20% to 40% of a company’s technology landscape is absorbed by technical debt. IT departments discuss agile development methods but fail to implement practices to minimize debt. They rush to meet sprint deadlines and opt for solutions that increase technical debt. If the debt is not addressed in a later iteration, it continues to grow.

Calculating Technical Debt

The first step in determining opportunity costs is calculating the cost to remove the technical debt. Several methods exist for calculating technical debt, including the following:

  • Code quality. Look at lines of code, nesting depth, cognitive complexity, maintainability, and similar metrics to measure technical debt. If quality metrics begin to slip, the technical debt will increase.
  • Defect ratios. Compare the number of new defects against fixed defects. A high ratio indicates growing technical debt, while a low ratio indicates little debt.
  • Reworking. Stable code should require minimal upkeep. Tracking which modules or code segments are being reworked is one way to assess technical debt. If code segments require repeated reworking, the code may be contributing to technical debt.
  • Completion time. Low-priority fixes should not consume significant resources. When developers take longer than expected to address a defect, the code may be adding to technical debt. Tracking time to complete can flag likely pockets of technical debt.
  • Technical debt ratio. Calculate the cost of addressing technical debt by comparing what it costs to fix a problem versus rewriting it (a quick worked example follows this list).
  • Automated tools. AI-based tools can help identify and quantify technical debt. Using algorithms, the tools can provide an objective assessment of technical debt.
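As a rough sketch of the technical debt ratio bullet above, one common formulation divides the estimated cost of remediating known issues by the estimated cost of rebuilding the system from scratch; the dollar figures below are invented for illustration.

public class TechnicalDebtRatio {
    public static void main(String[] args) {
        double remediationCost = 250_000;   // estimated cost to fix known issues
        double developmentCost = 2_000_000; // estimated cost to rebuild the system
        double ratio = (remediationCost / developmentCost) * 100;
        System.out.printf("Technical debt ratio: %.1f%%%n", ratio); // prints 12.5%
    }
}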

Because measuring technical debt can be time-consuming, AI-based tools that learn as they analyze legacy code can streamline the process. With a less labor-intensive approach, IT departments can spend more time evaluating opportunity costs without sacrificing the detailed analysis of technical debt.

Related: Evolving Toward Modern-Day Goals with Continuous Modernization

Determining Opportunity Costs

Let’s assume that the technical debt for a transaction processing module is $1 million. The module is a core component of the back office that most people view as having minimal impact on customer-facing improvements. When assessing the pros and cons of the modernization project, cost reduction seems to be the primary reason for approval.

Rather than focus on lowering costs when asking for approval, focus on opportunity costs if no action is taken.

Let’s use the transaction processing module. The existing code lacks flexibility, and adding a new transaction type would require rewriting the module. Now let’s assume peer-to-peer transfers will be a new transaction type within two years.

The government may also begin regulating peer-to-peer (P2P) payments, which the existing system cannot support. Recent research indicates that 84% of the population has used P2P transfers and that about half of those users make a transfer at least once a week. With the US adult population at almost 260 million in 2020, a potential market share of even 5% equals 13 million people. If those 13 million behave the way the research suggests, roughly 5.5 million of them would make a P2P transfer every week. At a fee of $0.05 per transaction, the lost transaction revenue, i.e., the opportunity cost, would approach $15 million in a single year.
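As a back-of-the-envelope check on those figures (all inputs taken from the paragraph above), the quick calculation below lands at roughly $14 million per year, in the same ballpark as the almost-$15-million estimate.

public class P2pOpportunityCost {
    public static void main(String[] args) {
        double marketShare       = 260_000_000 * 0.05;       // 13 million people
        double weeklyUsers       = marketShare * 0.84 * 0.5;  // ~5.5 million weekly P2P users
        double annualLostRevenue = weeklyUsers * 0.05 * 52;   // $0.05 fee, 52 weeks
        System.out.printf("Weekly users: %.1f million%n", weeklyUsers / 1_000_000);
        System.out.printf("Lost revenue: $%.1f million per year%n", annualLostRevenue / 1_000_000);
    }
}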

Suddenly, the discussion isn’t about how much modernizing the module will cost but how much revenue would be lost if it isn’t.

Communicating the Opportunity Costs of Technical Debt

Customers and competitors have forced companies to modernize applications that impact customer experience. Organizations have spent millions on digital transformation without touching the core systems at the heart of their infrastructure. If the legacy systems are working, why risk breaking them?

Executives remember the chaos of past system upgrades or replacements. They hesitate to touch core systems because they fear a repeat experience. They do not see the constraints a legacy system places on their ability to pivot quickly, gain data-based insights, deliver better customer experiences, and ensure sustainability.

It’s Not Just a Technical Problem

Removing technical debt is a business problem. Yet, most businesses view it as technical. IT must change the perception if they want approval to modernize core systems. They must still conduct their due diligence to quantify technical debt and the cost to rewrite, remove, or refactor code. But they must present the information in business terms.

Using opportunity costs as a framework for presenting a business case reshapes the discussion. This effort requires a collaborative approach using subject matter experts (SMEs) who can help identify possible opportunity costs. In most cases, the SMEs know the system limitations but lack the technical knowledge to quantify the scope and cost of the effort.

Together, cross-functional teams can prepare business cases that illustrate the need for modernization beyond cost reduction. They can communicate solutions through opportunity costs that resonate with company executives. Combining resources makes reshaping the discussion possible and the chance of approval much higher.

Use the Right Tools to Start Modernizing the Smart Way

vFunction offers AI-based tools to assess the technical debt of Java and .NET applications. Using vFunction’s Assessment and Modernization Hubs, IT executives can provide a comprehensive analysis of technical debt that forms the basis for an opportunity cost assessment. Contact us to learn how our solution can help reshape your technical debt discussions.

Evolving Toward Modern-Day Goals with Continuous Modernization

Part 4 in the Uncovering Technical Debt Series from Intellyx, for vFunction. [Check out part 1 | 2 | 3 here.]

We’ve dug deep into our technology stacks, uncovering all of the legacy artifacts and monoliths that we could find from past incarnations of our organization. 

We’ve cataloged them, rebuilt them to modern coding standards, and decoupled their functionality into object-oriented, service-enabled, API-addressable microservices.

Now what? Are we modernized yet? 

Well, mostly. There are always some systems that just aren’t worth the time and attention to replace right now, even with intelligent automation and refactoring solutions.

Plus, we acquired one of our partner companies last year, and we haven’t had a chance to merge their catalog with our ordering system yet, so they are still sending us EDI dumps and faxes for urgent customer requests…

We’re never really done with continuous modernization

We’ve compared legacy modernization to the discipline of archaeology. But what happens once archaeologists finish their excavation and classification expeditions? Anthropologists can take over the work from here, interpreting societal trends and impacts even as the current culture continues to evolve and generate new artifacts. 

Similarly, discovering and eliminating uncovered technical debt isn’t a one-time modernization project, it’s a continuous expedition of reevaluation. Once an application is refactored, replatformed or rearchitected, it creates concentric ripples, exposing more dependencies and instances of technical debt across the extended ecosystem, including adjacent applications within the organization, third-party services and partner systems.

Mapping the as-is and to-be state of the codebase with discovery and assessment tools is useful for prioritizing the development teams’ targets for each project phase around business value, but business priorities will change along with the application suite. 

Development teams also get great utility from conducting modernization projects with the help of AI-driven code assessment and refactoring solutions like vFunction Code Copy, but they can realize even greater benefits by retaining the history of what worked (and didn’t work) to inform future transformations.

Not every modernization project works out equally well, but when the hard lessons of modernization feed back into the next assessment phase, this virtuous cycle can become part of the muscle memory of the organization, allowing mental energy to be spent on the most important choices that affect the long-term goals of the business.

Putting technical debt to rest: what to expect

No computer science college student or self-taught coder sets out to spend a career finding and fixing bugs in their code, much less someone else’s – but on average, developers spend at least 30 to 50 percent of their time on rework, rather than innovation and enablement of new features that are perceived to add business value.

Besides the perceived thanklessness of the effort, developers encounter morale-destroying toil when sifting through legacy code, which usually contains lots of class redundancies, recursive methods, poor documentation and a general lack of traceability, resulting in slow progress.

Continuous modernization offers a way out of this thankless job, by preventing technical debt from collecting during each assessment and refactoring phase. 

Here are some of the levers teams are pulling for successful long-term improvements:

  • Continuous assessment. The best performing initiatives are not just conducting initial assessments, they are continuously mapping, measuring and observing modernization efforts before, during, and after each refactoring run.
  • FinOps practices bring financial concerns and tradeoffs into each modernization selection or option. IT buying executives have been doing ROI analyses for vendor selection and capex computing investments for years. Now, savvy buyers are getting better cost justification for money spent on modernization, with real financial metrics for resources, employee and customer retention, and delivered customer value.
  • Service-level objectives (SLOs) offer positive motivation for time-and-labor savings and incremental delivery of new services, in contrast to the negative contractual penalties enforced through SLA failures. Developers are incentivized to meet goals such as faster refactoring projects, faster automated deployments, and higher-value updates – with fewer hitches and less rework required.
  • Qualitative business goals are equally as important to success. Better team morale improves productivity and employee retention rates, versus trying to replace high-quality people with new ones that could take months to get up to speed. Developers love working for agile enterprises, where they can test theories and ultimately help the application suite evolve faster in the future to meet changing customer needs.

Trending toward velocity and morale at Trend Micro

Trend Micro is considered a global leader in cloud workload security, with several successful products underneath the banner of its platform – but that didn’t mean their modernization journey started without major headaches. 

Much of their existing product suite, with more than 2 million lines of code and 10,000 independent Java classes, was built before secure API connections between cloud infrastructure and microservices were fully sussed out by the development market. Therefore, earlier customers were more inclined to trust on-premises installations and updates of vital virus, spam and spyware prevention software.

As the modern trends of SaaS-based vendors and cloud-based enterprise applications really hit stride over the last decade, Trend Micro started offering a re-hosted version of its suite under their CloudOne™ Platform banner.

Their initial lift-and-shift of one module’s code and data store to AWS offered some scalability and cost benefits due to elastic compute resources, but as the user base grew, it was becoming harder and harder for product dev teams to get a handle on inter-product dependencies that hindered future releases and updates to meet customer needs. Morale suffered as the replatforming took about a year.

Trend Micro turned to vFunction to identify and prioritize modernization of their most critical “Heartbeat” integration service – with more than 4000 Java classes that take in data from sensors, event feeds and data services across the product suite.

Then, using vFunction for modernization, the team was able to visually understand code complexity, applying AI to identify essential, interconnected, and circular dependencies and to deprecate dead code that would no longer add any real value for customers going forward.

Through refactoring, they were able to decide which classes should be included as part of the new Heartbeat service, and which should be kept in a common library for shared use across other product modules in the future.

This modernization project took less than 3 months – a 4X speed improvement over the previous project, with successive update deployment times decreased by 90%. Best of all, morale on the team has improved by leaps and bounds.

The Intellyx Take

Continuous modernization offers enterprises a lasting bridge from the monolithic past to a microservices future, but with constant change at enterprise scale, the journeys across this bridge will never really end.

To get to the bottom of the biggest obstacles of modernizing our digital estates, we must first assess and prioritize code refactoring and application architecture efforts around resolving technical debt.

Then, our intrepid teams can venture forth, digging to unearth the artifacts and digital foundations of our organizations, transforming our applications into modular cloud native services, resetting the values of our shared culture, and adapting our architectures to meet the challenges of a global, distributed, hybrid IT future.

Can you dig?

©2022 Intellyx LLC. Intellyx retains editorial control of this document. At the time of writing, vFunction is an Intellyx customer. Image credit: Licensed from Alamy “2001: A Space Odyssey” movie still.

The Cost of Technical Debt and What It Can Mean for Your Business

A report published by OutSystems revealed that 69% of IT leaders found that technical debt fundamentally limits their ability to innovate. In addition, 61% stated that it negatively affects their organization’s performance, and 64% believe that technical debt will continue to have a substantial impact in the future. The report further explained that, in some cases, organizations may benefit more from investing in decreasing technical debt rather than innovation.

But what do you do when 80% of each dollar spent on your application budget goes to keeping the lights on? Certainly, maintenance, firefighting, and root cause analysis don’t usually fall into the “innovation” category. What kinds of technical debt contribute to this drain on company resources, and what can be done about them?

The History Behind the Term Technical Debt

Ward Cunningham, one of the Manifesto for Agile Software Development coauthors and the developer of the first wiki, once suggested that some issues in code resemble financial debt. The analogy is that it’s okay to borrow against your future if you’re ready and willing to pay that debt when it’s due.

The term “technical debt” sounds more like a financial term than a programming theory for a reason. While developing a financial application in the Smalltalk-80 language, Cunningham used the financial analogy to justify refactoring to his boss. 

Some people continue to dispute the precise connotation Cunningham intended to convey: Is it okay to incur technical debt or best not to? At its core, Cunningham’s story identifies a crucial problem faced by many software teams. 

Specific critical tasks such as refactoring and making improvements to the code and its architecture are delayed to meet deliverables and deadlines. However, these coding tasks must be completed eventually, resulting in cumulative technical debt. Before development teams know it, they’re overworked and burned out from trying to pay off all the technical debt they’ve incurred over time.

The True Cost of Technical Debt

Have you ever truly contemplated the cost of technical debt? By now, you likely realize that your technical debt is costing your team more over the long term than the perceived benefits of delaying paying off the debt. However, since technical debt isn’t as tangible as monetary debt, you might wonder how and where one begins to estimate its cost. 

To understand why technical debt is so easy to ignore, one must understand why people pay tens of thousands of dollars in interest throughout their life rather than saving and paying cash. 

CNBC Select looked at the amount of interest the average American pays on loans taken out throughout their lifetime. Its report found that, between a student loan, a used car payment, a single credit card balance, and a mortgage on a median-priced home, the average borrower pays roughly $164,000 in interest over a lifetime.

With so much hard-earned money going towards paying off interest, one would think that more people would resist taking financial “shortcuts.” But similarly to consumers who accept paying thousands of dollars in interest as a reasonable shortcut, IT professionals often disregard technical debt for an end goal that seems like a good idea — until they’re forced to repay it in interest. 

For many borrowers, being able to possess that item now is more important than what they could gain over time by saving and paying cash. This is analogous to the way many IT professionals feel when they focus on getting software into production sooner without fully considering the technical debt accruing. Rather than methodically ensuring all past issues are resolved, they move forward to complete a new goal, leaving problems in their wake that, sooner or later, must be addressed.

An application’s budget for maintaining accumulated technical debt could be upwards of 87%. That leaves only 13% of its budget going towards innovation. Furthermore, aging frameworks represent a security risk while they contribute to the cost of technical debt. That makes keeping the lights on even harder.

Related: How Dynamic Logging Saves Strain on Developers and Your Wallet

The Many Faces of Technical Debt

Classifying technical debt will by no means suddenly make it simple and easy to handle. But the classification process focuses development teams and enables them to have more productive conversations. 

Technical debt will always be a significant part of DevOps, and how to manage it effectively should be taught both to students pursuing a career in the field and to practitioners with years of experience. It’s also essential to constantly evaluate where and how technical debt is hindering your team.

Once you’ve identified these factors, it will be easier to increase overall productivity and deliver features in a timely fashion. While there are many types of technical debt, below are four examples of the types of technical debt many developers will encounter in their work.

1. Unavoidable Technical Debt

Organizational change and technological advancement are the primary sources of unavoidable technical debt. It usually accrues when scope modifications are requested mid-project and the cost of those modifications has to be absorbed immediately.

An example could be adding a new feature to a legacy system to better support mobile devices. In other words, unavoidable technical debt is generated whenever new organizational requirements or advances in technology cause a system’s code to become obsolete.

2. Software Entropy or Bit-Rot

Software entropy or bit-rot happens over the course of an application’s lifespan as the quality of software gradually degenerates. It eventually leads to usability issues such as errors or necessitates frequent updates. 

Software entropy also occurs if numerous developers make cumulative modifications that increase a software’s complexity over time. Their changes may also slowly damage the code or violate non-functional requirements such as accessibility, data integrity, security (cyber and physical), privacy (compliance with privacy laws), and a long list of others. Refactoring is generally the solution for software entropy.

3. Poor-Quality Software Code

Agile software development depends on trimming the scope of a release to guarantee high-quality code rather than prioritizing speed of release. When speed wins out, technical debt is passively generated each time the scrum team discovers an issue and patches around it. Every repetition of this cycle increases the cost of technical debt, resulting in decreased efficiency and productivity as development teams repay their analogical debts.

Technical debt comes in the form of unnecessarily complicated, unreliable, or otherwise convoluted code. No code is perfect, but when it’s saddled with excessive technical debt, it can become a bigger problem than the issue it was designed to resolve. 

The more technical debt found in a piece of code, the farther it drifts from its intended goal; the farther it drifts from that goal, the longer it takes to iron out the kinks.

Improvised or absent developer onboarding and training, as well as insufficient coding standards, also contribute to poor-quality software code. Having to rewrite outsourced code and working under poor scheduling add extra stress to an already demanding job. These factors tend to increase the cost of technical debt exponentially compared to other causes and are common contributors to developer burnout.

4. Poor IT Leadership 

Poor IT leadership is another major contributor to the cost of technical debt, as well as many of the consequences mentioned before. It materializes in various ways, with many IT managers either unaware or in denial of the problem. 

Micromanagement is a perfect example of a leadership style that contributes to varying degrees of technical debt. While it usually works great for small-scale projects, micromanagement causes leaders to develop tunnel vision. Before long, they’ve lost sight of the bigger picture and begun to rub their team the wrong way. All sorts of complications arise from these types of toxic environments. The resulting technical debt only compounds matters.

IT managers contribute to technical debt by not listening to, considering, or implementing feedback from their teams, and by failing to schedule sufficient time in each release to address historical debt. When input is ignored merely because it comes from subordinates, errors get overlooked.

In addition to that, cloud and containerization trends evolve at a rapid pace, often bypassing the understanding of both end users and IT management teams. Nevertheless, some organizations don’t want to risk appearing unknowledgeable and thus make poor decisions or adopt unnecessary tools that complicate things.

Related: Monoliths to Microservices: 4 Modernization Best Practices

How Technical Debt Affects the Customer Experience

It’s important to remember that technical debt isn’t merely about short-term and long-term deficits. Depending on how much debt was left in an application, system performance can be drastically affected. When it’s time to scale up or add new features to a debt-laden IT infrastructure, the customer experience suffers. 

For example, according to CNBC, online spending on Black Friday increased by nearly 22% in 2020 due to the COVID-19 pandemic restrictions. Many brick-and-mortar retailers scrambled to establish a competitive online presence. As a result, technical-debt-related issues such as website lagging, outages, and glitches plagued even major retailers.

Imagine the embarrassment of receiving complaints from customers about items vanishing from their carts at checkout. Worse yet, imagine losing tens of thousands of dollars to competitors during a lengthy crash.

The COVID-19 pandemic has forever changed how consumers interact with organizations. More crucially, the dynamic has shifted dramatically, with consumers having the power of countless choices. 

Along with that, digital technology makes it easier for consumers to express their concerns about poor service experiences, posing more challenges than ever before but increasing awareness of a company’s weak areas.

According to The Wall Street Journal, “Some 66% of consumers surveyed in 2020 said they had experienced a problem with a product or service, up from 56% in 2017, when the survey was last conducted.” Further, Gartner posits that an “effortless experience” is the key to loyal customers.

If your customers have to reach the customer service stage to express their complaints, whether about a product or about an inefficient, glitchy platform, you may already be too late. If they don’t, you may be a step ahead of your competition. Ultimately, the cost of technical debt can be much greater than you think, and addressing it should be a top priority for your company.

Don’t Allow Technical Debt To Drag Your Team Down

Senior developers find it difficult to illustrate the overall impact technical debt has on an organization’s bottom line to nontechnical executives and investors. Unlike financial debt, technical debt is a lot less visible, allowing people to disregard its impact more easily. When determining whether technical debt is worth it or not, context matters.

With the vFunction Platform, AI and data-driven analysis facilitate, accelerate, and focus most manual efforts and do much of the heavy lifting. This saves architects and development teams from spending thousands of hours manually examining small fragments of code and struggling to identify, let alone extract, domain-specific microservices. Instead, they can focus on refining a reference architecture based on a proper “bird’s eye” view. Contact us for a demo today and start properly managing your company’s technical debt.

Creating a Technical Debt Roadmap for Modernization

Every company carries a little technical debt, but keeping it below the recommended 5% can be challenging. If companies aren’t careful, the debt can grow to 10%, 50%, or even 80-90% in extreme cases. According to McKinsey, the average technical debt is between 20% and 40%. Repaying that quantity of debt takes planning, vigilance, and continuous attention. It takes building and executing a technical debt roadmap that balances the needs of the present while paying down the technical debt of the past.

Organizations with financial debt understand debt and its effects. They evaluate different repayment options. They crunch numbers and perform analyses until they’ve created a repayment plan that doesn’t constrict growth but does lower the debt. Executives understand that debt can hamper innovation and growth. With less money to invest in new product development, businesses lose their edge over competitors with less debt. 

Despite their understanding of financial debt, most companies fail to apply the same principles to technical debt. There’s little planning, and decisions are rarely based on data. As a result, enterprises invest in applications that don’t lower technical debt. They devote resources to solutions that are reaching their end of life. Executives struggle to find a place to start.

Without a Technical Debt Roadmap

The 2022 McKinsey study mentioned above included 220 organizations across different business sectors, and it found that the percentage of technical debt a company has correlates with business performance. Of the participating companies, those with the lowest technical debt ratio experienced 20% higher revenue growth than those with the highest debt ratio. 

The study calculated technical debt via a “Technical Debt Score” (TDS) value, and the research found that businesses in the bottom 20% of technical debt, with the poorest TDS, were 40% more likely to cancel or fail to complete modernization efforts.

The top performers spent, on average, 50% more on modernization than those in the lowest percentiles. As they paid down their debt, they remained disciplined in how and where they spent their technology dollars. These companies learned through the process how to use technology to drive innovation and increase revenue.

Related: Eliminating Technical Debt: Where to Start?

Originally, technical debt referred to the consequences of software developers placing delivery deadlines over technical, architectural, or design considerations. It’s what happens when shortcuts in code quality are taken to meet customer requirements. Today’s technical debt has expanded to include any decision that impacts a company’s technology stack.

Determining the size of technical debt is the first step in creating a roadmap. The assessment should encompass factors such as defect ratios, completion time, rework, code quality, and architectural debt.

Some of these factors are easier to assess, while others, such as code quality, require more intensive analysis. However, all of them should be evaluated to ensure a solid technical debt roadmap.

Defect Ratios

No application is perfect; however, the older the software, the fewer the defects. When the reverse happens, the defect ratio increases. The result is a growing technical debt. If left unchecked, the software may reach a point where it is irreparable. The legacy solution can no longer operate in a modernized construct.

Completion Time

When an engineer or developer is assigned trouble or support tickets, how long does it take to complete them? Focusing on low-priority tickets can identify a growing technical debt. For example, an incorrect value appears in a report. Because the legacy code uses tables instead of databases, the developer has to determine how the software produces the value used in the report. In a monolithic architecture, tracing the value could require combing through hundreds of lines of code. If the value is calculated, the time to resolve increases.

Rework

Reworking the same code segment indicates a technical debt. An employee opens a support ticket for a legacy utility. The assigned developer sees it’s an easy fix and completes it in less than an hour. A week later, another developer accesses the same module to fix a different support ticket. This correction takes a little longer and requires reworking the first fix. When programmers are making fixes to fixes, it’s an indication that technical debt is accruing. 

Code Quality

Code quality requires more in-depth analysis than higher-level assessments, such as total defects or rework statistics. Quality code in relation to technical debt encompasses lines of code, code complexity, inheritance, maintainability, nesting, and couplings. Assessing quality may require tools to look at specific parameters to identify coding flaws.

Architectural Debt

Academic research into this area took off in 2012 with “In Search of a Metric for Managing Architectural Technical Debt,” in which authors Robert L. Nord, Ipek Ozkaya, Philippe Kruchten, and Marco Gonzalez-Rojas created a metric for measuring architectural technical debt based on dependencies between architectural elements. They use the metric to show how an organization should plan development cycles and roadmap investments that account for the effect accumulating technical debt will have on the resources required for each subsequent release. This breakthrough study recently received the “Most Influential Paper” award at the 19th IEEE International Conference on Software Architecture.

Assessing Code Quality

When evaluating legacy code, poor quality doesn’t mean poor programming. It means evaluating legacy systems in terms of today’s coding standards. For example, older architectures created one large monolithic application consisting of thousands of lines of code. The software was designed to run on a single on-premise server. Today’s architecture breaks that large application into smaller microservices better suited to a cloud environment. 

Assessing code quality provides data for resolving technical debt. It uses the following metrics to determine the status of individual applications that can be used to create a technical debt roadmap.

Risk Index

Code dependencies are the bane of modernization efforts. Depending on how long the legacy system has existed and how many programmers have worked on it, finding dependencies is like looking for Waldo: they are buried somewhere among the lines of code, and failing to address them beforehand can become a career-changing move.

Paying back a technical debt should not result in an unexpected application shutdown. With the right tools, organizations can identify dependencies to be evaluated before changes are made, reducing the risk of an epic fail. 

Complexity Index

Think of the complexity index as strings of lights. Is there anything more frustrating than trying to untangle holiday lights? A complexity index identifies how entangled class dependencies are. Like light strings, a few dependencies can be untangled and put to use. Too many dependencies may make it too costly to isolate into microservices. Knowing that upfront makes it easier to assess whether to retain or replace a legacy solution.

Debt Index

 A debt index provides an overall assessment of an application’s technical debt. It combines the risk and complexity indices and compares the results with other applications. Sorting the debt index from high to low can be the start of a technical debt roadmap.

Accepting the debt index without evaluating the complexity and risk values can skew the roadmap. Although complex entanglements often correlate with high risk, organizations must look at the details. After all, quality code is always about the details.

Frameworks

As technologies advance, so do the frameworks. What was considered leading edge two years ago has become a standard that everyone uses. Frameworks that have existed for decades may no longer run on supported operating systems. Dependencies may tie to third-party solutions that no longer exist. These aging frameworks pose security risks.

A recent example of security risks in existing frameworks is the Log4j vulnerability discovered in December 2021. This flaw was a zero-day vulnerability in Apache’s Log4j logging library. Although Apache had released later versions, many organizations retained the older version for compatibility with existing architectures.

Understanding the weaknesses in older frameworks should be part of every technical debt roadmap. If vulnerabilities can be patched, an older framework may place lower on the modernization list than a newer framework with no available security patches.

Future Proof

Technical debt happens every day. Companies opt for a quick fix to get a business-critical solution back in operation as fast as possible. Maybe the delivery date doesn’t allow for the necessary rework. A patch is delivered instead, and technical debt increases.

Modernizing existing architecture also requires that solutions are compatible with the latest compilers, libraries, and frameworks. Staying as current as possible reduces ongoing technical debt. With less time spent on lowering debt, businesses can devote more resources to innovation and growth.

Related: Go-to Guide to Refactoring a Monolith to Microservices

Creating a Modernization Roadmap

Part of creating a technical debt roadmap is deciding how best to address the modernization. Options may include refactoring, re-platforming, and rearchitecting. Each approach may be part of an organization’s plan to lower technical debt.

Refactoring

Refactoring turns messy code into clean code. Clean code has fewer complexities, eliminates duplication, and makes for easier maintenance. Messy code can take longer to compile or throw errors that are corrected multiple times by different programmers. With large projects and multiple programmers, it’s easy to lose control over the code. Refactoring cleans code so it can run faster and improve performance.
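As a tiny, invented illustration of what refactoring buys: a surcharge calculation that had been copy-pasted into several methods now lives in one well-named place, so a rate change is made once instead of in every copy.

public class ShippingCalculator {

    // The fuel surcharge used to be duplicated in each cost method;
    // after refactoring it lives in one place and each caller reuses it.
    private double fuelSurcharge(double weightKg) {
        return Math.max(2.0, weightKg * 0.15);
    }

    public double domesticCost(double weightKg) {
        return weightKg * 4.0 + fuelSurcharge(weightKg);
    }

    public double internationalCost(double weightKg) {
        return weightKg * 9.0 + fuelSurcharge(weightKg);
    }
}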

Replatforming

Replatforming adds functionality to take advantage of cloud infrastructures. It doesn’t modify the application. It can improve an application’s ability to scale and expedite interactions with cloud-based data stores. It can be a cost-effective way to leverage cloud functionality without the cost of replacing or rearchitecting code.

Rearchitecting

Designing an application to operate in the cloud means rearchitecting the application. Developers and engineers start from the ground up, redesigning an existing solution to operate as a cloud-native application. While rebuilding an application is labor-intensive, it may be the only way to modernize an existing solution.

Managing Technical Debt

The McKinsey article referred to technical debt as dark matter: it exists and its impact can be felt, but it can’t be seen or measured directly. At vFunction, we politely disagree. Technical debt is quantifiable through automated tools that leverage advanced technologies, such as artificial intelligence, to move monolithic structures to microservices for cloud-native deployment.

If you’re interested in creating a technical debt roadmap based on data, request a demo of our platform. Our team is excited to show you how to quantify and lower your technical debt.

Common Pitfalls of App Modernization Projects

In today’s market environment, the ability to quickly take advantage of new technological capabilities is of paramount importance to a company’s ability to maintain or enhance its competitive position. That’s why for many businesses, the modernization of their legacy application portfolio has become not just a high priority but an existential necessity.

The problem such organizations face is that the legacy apps they depend on for some of their most essential business processes actually hinder their ability to keep pace with rapidly changing technological and marketplace conditions. Stefan Van Der Zijden, VP Analyst at Gartner, puts it this way:

“For many organizations, legacy systems are seen as holding back the business initiatives and business processes that rely on them… application leaders must look to application modernization to help remove the obstacles.”

The Importance of Application Modernization

Legacy apps are typically structured as monoliths, meaning that the codebase is organized as a single, non-modularized unit that has functional implementations and dependencies interwoven throughout. Updating such code to interoperate with the cloud-native systems and resources that dominate today’s technological landscape is difficult, time-consuming, risky, and costly.

To overcome this difficulty, organizations must modernize their legacy monolithic codebases to convert them into modern, cloud-native applications that can easily integrate into today’s technological environment. 

Most companies have not only recognized this fact but are acting on it – in a recent study conducted by Wakefield Research, 92% of respondents said their companies are either currently modernizing their legacy apps or are actively planning to do so.

Challenges of Application Modernization

Although application modernization is now considered by many companies to be essential, getting it right can be difficult. According to the Wakefield study, 79% of application modernization projects fail to achieve their goals. 

Application modernization efforts have historically been time-consuming and costly: the typical modernization project lasts 16 months and costs about $1.5 million—and more than a quarter of Wakefield survey respondents (27%) say their projects took two years or more. In a recent survey, 93% of respondents characterized their modernization experience as “extremely or somewhat challenging.”

Related: App Modernization Challenges That Keep CIOs Up at Night

App Modernization Pitfalls to Avoid

Let’s take a look at some common pitfalls that can, if you fail to avoid them, add your project to that 79% failure rate:

1. Inadequate Management Support

Buy-in from an organization’s executive management team is indispensable to the success of legacy app modernization. In the Wakefield survey, both executives and architects cited a lack of “prioritization from management” as a major factor that “stopped modernization projects in their tracks.”

If a company’s executive management isn’t on board with the necessity for legacy app modernization and with the ROI such projects can be expected to yield, the budget, personnel, and other required resources either won’t be supplied at all or won’t be maintained at an adequate level. 

When changing marketplace conditions cause the organization to readjust its priorities, modernization projects can sometimes lose the management focus and budgetary support needed for success. To avoid that happening, you must be prepared to make and remake the case for the business utility and ROI of your modernization efforts as marketplace conditions continually evolve.

2. Failure to Adequately Address Cost Concerns

Nearly 50% of both executives and architects in the Wakefield study agreed that securing the needed budget and other resources is the most difficult step in implementing a modernization project. That’s typically because the executives who control the purse strings haven’t been given reliable information that convinces them that the return on the not-inconsiderable investment required for such projects will be great enough to justify the financial and business risks.

By conducting a data-driven assessment of your legacy application estate, you can provide your organization’s management team with accurate, quantified data that makes the business case for investing the budgetary and other resources an app modernization project will require. 

3. Misalignment Between Business and Technology Teams

The Wakefield study reports that 97% of survey respondents expected that someone in their organization would push back against modernization proposals. Such objections commonly occur when various stakeholders are not on the same page. Here are some typical reasons why that may occur:

  • The risk seems too great—Because legacy app modernization involves significant changes to critical systems, there is a definite degree of risk attached to such efforts. In the absence of trustworthy information and specific, well-developed plans that mitigate the risk factors, business and IT stakeholders may be reluctant to accept the risks to business operations that a modernization project represents.
  • Stakeholders fear large-scale change—When legacy applications are modernized, some associated workflows will usually also change. Well-established processes may be altered, and workers may need to be retrained or reassigned. Such developments introduce levels of uncertainty and instability that business stakeholders may be wary of.
  • Stakeholders fear losing their role—The business process changes that arise from app modernization efforts may threaten the traditional roles of some stakeholders or seem to relegate their perspectives and concerns to a lower priority level.

To avoid having such concerns become sources of pushback, stakeholders should be presented with a well-developed, data-driven modernization plan that addresses their unique issues.

4. Failure to Accurately Set Expectations

In the Wakefield study, this was the #1 reason given by respondents who started modernization projects they didn’t complete. Areas of particular concern include unrealistic expectations relating to budget and schedule requirements and anticipated project results such as improvements in engineering velocity and application innovation. 

To overcome this obstacle, you need to be able to supply stakeholders with accurate, quantified data regarding the complexity of the task and the timeframe and budget that will be required for completing it.

In addition, you must ensure that your modernization methodology can produce the results you promise. Companies often make the mistake of thinking that just moving an application to the cloud will provide an acceptable degree of modernization. That’s not the case. Such a migration (often called a “lift and shift”) retains all the disadvantages of the application’s monolithic architecture.

True modernization only occurs when the app is not only migrated to the cloud but is refactored from a monolith to a microservices architecture. Only then will you fully reap the benefits that make a modernization project worthwhile.

5. Failure to Make Required Organizational Structure Changes

App modernization is far more than just a technical exercise. IBM puts it this way:

“A cultural transformation is also imperative. Organizational roles, business processes and IT skills must evolve and advance for your cloud migration and application modernization to be a success.”

According to Conway’s Law, the organizational structure of a software development group must align with the structure of the application they intend to produce. When that doesn’t happen, an app modernization project is headed for trouble. Software engineer Alex Kondov is adamant that “you can’t fight Conway’s Law,” and supports that declaration with this observation:

“Sadly, often a company’s structure may not support the system it wishes to create. Time and time again, when a company decides that it doesn’t apply to them they learn a hard lesson.”

6. Inadequate Skills or Training

A legacy app modernization project is a complex process that requires a level of expertise many companies don’t have in-house. So it’s no surprise that almost a third of respondents to the Wakefield survey cited a lack of worker skills or training as a key obstacle to success. 

Yet, in today’s job market, hiring and retaining highly skilled software developers can be a time-consuming and costly process. You can reduce this requirement by providing your development team with modernization tools that embody skills your developers may lack.

7. Lack of Intelligent Tools

In the Wakefield survey, this was the #1 reason cited by software architects for app modernization failures. The process of refactoring monolithic legacy apps to convert them to a cloud-native microservices architecture is a highly complex undertaking that may require unraveling tens of millions of lines of code to expose hidden functionalities and dependencies in the codebase. 

Doing this using an essentially manual approach may require many months or even years of developers’ time, and even then the risk of an unsatisfactory outcome is extraordinarily high. On the other hand, the use of a state-of-the-art, AI-enabled, automated modernization tool can speed up the process by orders of magnitude while all but eliminating the risk factors that plague manual efforts.

Related: The Easy Way to Transition from Monolithic to Microservices

App Modernization Starts with Using the Right AI Tech

Many companies attempt to modernize applications using general-purpose design, analysis, and performance monitoring tools. But these have proven to be inadequate for the task. 

What’s needed is a tool that’s specifically designed for modernization, with advanced AI capabilities that allow it to comprehensively analyze monolithic applications, reveal hidden dependencies and functional implementations, and benchmark the levels of technical debt, complexity, and modernization risk associated with each app. It should also be able to substantially automate the process of restructuring complex monolithic apps into microservices.

vFunction provides just such an automated, AI-empowered tool. The vFunction Assessment Hub measures the complexity, technical debt load, and modernization risk of each app. The vFunction Modernization Hub then automates about 90% of the process of refactoring monolithic codebases into microservices. 

To see first-hand how vFunction can help you avoid the pitfalls that have wrecked so many application modernization efforts, request a demo today.

Don’t Let Technical Debt Stymie Your Java EE Modernization

Part 3 in the Uncovering Technical Debt series: An Intellyx BrainBlog for vFunction. Check out part one here. Check out part two here.

When Java Enterprise Edition (Java EE) hit the scene in the late 1990s, it was a welcome enterprise-class extension of the explosively popular Java language. J2EE (Java 2 Enterprise Edition, as it was called then) extended Java’s ‘write once, run anywhere’ promise to n-tier architectures, offering Session and Entity Enterprise JavaBeans (EJBs) on the back end, Servlets on the web server, and JavaServer Pages (JSPs) for dynamically building HTML-based web pages.

Today, more than two decades later, massive quantities of Java EE code remain in production – only now it is all legacy, burdened with technical debt as technologies and best practices have advanced over time.

The encapsulated, modular object orientation of Java broke up the monolithic procedural code of languages that preceded it. Today, it’s the Java EE applications themselves that we consider monolithic, fraught with internal dependencies and complex inheritance hierarchies that add to their technical debt.

Modernizing these legacy Java EE monoliths, however, is a greater challenge than many organizations expect. Simply getting their heads around the internal complexity of such applications is a Herculean task, let alone refactoring them.

For many organizations, throwing time, human effort, and money at the problem yields little to no progress: they reach a point where some aspect of the modernization project stymies them, and the effort grinds to a halt.

Don’t let technical debt stymie your Java EE modernization initiative. Here’s how to overcome the roadblocks.

Two Examples of Java EE Technical Debt Roadblocks

A Fortune 100 government-sponsored bank struggled with several legacy Java EE applications, the largest of which was a 20-year-old monolith that contained over 10,000 classes representing 8 million lines of code.

Replacing – or even temporarily turning off – this mission-critical app was impossible. Furthermore, years of analysis aimed at untangling the complex internal interdependencies went basically nowhere.

The second example, a Fortune 500 financial information and ratings firm, faced the modernization of many legacy Java EE applications. The company made progress with their modernization initiative, shifting from Oracle WebLogic to Tomcat, eliminating EJBs, and upgrading to Java 8.

What stymied this company, however, was its dependence on Apache Struts 1, an open-source web application framework that reached end-of-life in 2013.

This aging framework underpinned most of the company’s Java EE applications, while introducing potential compatibility, security, and maintenance issues across that legacy portfolio.

Boiling Down the Problem

In both situations, the core roadblock to progress with these respective modernization initiatives was complexity – either the complexity inherent in a massive monolithic application or the complex interdependencies among numerous applications that depended on an obsolete framework.

Obscurity, however, wasn’t the problem: both organizations had full access to the inner workings of their Java EE applications. Both companies had their respective source code, and Java’s built-in introspection capabilities gave them all the data they required about how the applications would run in production.

In both cases, there was simply too much information for people to understand how best to modernize their respective applications. They needed a better approach to making decisions based upon large quantities of data. The answer: artificial intelligence (AI).

Breaking Down the Roadblocks

When such data sets are available, AI is able to discern patterns where humans get lost in the noise. By leveraging AI-based analysis tooling from vFunction, both organizations got a handle on their respective complexity, giving them a clear roadmap for resolving interdependencies and refactoring legacy Java EE code.

The Fortune 100 bank’s multi-phase approach to Java EE modernization included automated complexity analysis, AI-driven static and dynamic analysis of running code, and refactoring recommendations that included the automated extraction of services into human-readable JSON-formatted specification files.

The Fortune 500 financial information firm leveraged vFunction to define new service boundaries and a common shared library. It then merged and consolidated several services, removing the legacy Struts 1 dependency in favor of a modern Spring REST controller. It also converted numerous Java EE dependencies to Spring Boot, a modern, cloud-native Java framework.
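To make that kind of change concrete, here is a minimal before-and-after sketch in Java. The feature (a rating lookup), class names, and endpoint are hypothetical illustrations, not taken from the firm’s actual codebase; they simply contrast a Struts 1 Action that forwards to a JSP view with the kind of Spring REST controller that replaces it.

```java
// BEFORE (hypothetical, separate file): a Struts 1 Action coupled to the end-of-life framework.
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.apache.struts.action.Action;
import org.apache.struts.action.ActionForm;
import org.apache.struts.action.ActionForward;
import org.apache.struts.action.ActionMapping;

public class RatingLookupAction extends Action {
    @Override
    public ActionForward execute(ActionMapping mapping, ActionForm form,
                                 HttpServletRequest request, HttpServletResponse response)
            throws Exception {
        String issuerId = request.getParameter("issuerId");
        request.setAttribute("rating", lookupRating(issuerId)); // exposed to a JSP view
        return mapping.findForward("success"); // forward name defined in struts-config.xml
    }

    private String lookupRating(String issuerId) {
        return "AA+"; // placeholder for the monolith's real business logic
    }
}
```

```java
// AFTER (hypothetical, separate file): the same capability as a Spring REST controller,
// suitable for a Spring Boot service extracted from the monolith.
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RatingController {

    @GetMapping("/ratings/{issuerId}")
    public String getRating(@PathVariable String issuerId) {
        return lookupRating(issuerId); // returned directly as the HTTP response body, no JSP needed
    }

    private String lookupRating(String issuerId) {
        return "AA+"; // placeholder for the extracted business logic
    }
}
```

The structural change is the point: the controller no longer depends on the Struts framework or a server-side view layer, so it can be deployed and versioned independently of the monolith.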

The Business Challenges of Technical Debt

Both organizations were in a ‘for want of a nail, the kingdom is lost’ situation – what seems like a relatively straightforward issue stymied their respective strategic modernization efforts.

When such a roadblock presents itself, all estimates about how long the modernization will take and how much it will cost go out the window. Progress may stop, but the modernization meter keeps running; the initiative delivers less and less value to the organization while the team continues to beat its head against the wall.

Not only does morale suffer under such circumstances, but the technical debt continues to accrue as well. In both situations, the legacy apps were mission-critical, and thus had to keep working. Even though the modernization efforts had stalled, the respective teams were still responsible for maintaining the legacy apps – thus making the problem worse over time.

The Intellyx Take

During the planning stages of any modernization initiative, teams hammer out what seem to be reasonable estimates for cost, time, and resource requirements. Such estimates are invariably on the low side, and when a roadblock stops progress, the management team must discard them entirely.

Setting the appropriate expectations with stakeholders, therefore, is fraught with challenges, especially when those stakeholders are skeptical to begin with. Unless the modernization team takes an entirely different approach – say, leveraging AI-based analysis to unravel previously intractable complexity – stakeholders are unlikely to support further modernization efforts.

It’s important to note the role that vFunction played. It didn’t wave a magic wand, converting legacy code to modern microservices. Rather, it processed and interpreted the data each organization had available (both static and dynamic), leveraging AI to discern the patterns in that data that were needed to make the decisions that produced timely modernization outcomes. Considering the deep challenges these customers faced, such results felt like magic in the end.

Five Reasons Tech Debt Accumulates in Business Applications

Almost any organization that develops software for internal or external use will sooner or later have to face the issue of technical debt. That’s especially the case if the organization depends on legacy apps for some of its critical business processing. Because technical debt makes it extremely difficult for a company to maintain and update its apps to meet changing requirements, it can significantly diminish the organization’s ability to achieve its business goals. A report from McKinsey puts it this way:

“Poor management of tech debt hamstrings companies’ ability to compete.”

But before a company can deal effectively with its technical debt, there are several questions it needs to answer: what, exactly, is technical debt, why is it so problematic, where does it come from, and what can be done about it? Let’s take a brief look at these issues.

What is Technical Debt?

TechTarget explains technical debt this way:

“Software development and IT infrastructure projects sometimes require leaders to cut corners, delay features or functionality, or live with suboptimal performance to move a project forward. It’s the notion of ‘build now and fix later.’ Technical debt describes the financial and material costs that come with fixing it later.”

And the financial costs can be substantial. The cost to companies of “fixing” their technical debt is about $361,000 for every 100,000 lines of code, and that number is even higher for Java applications.

As with financial debt, an organization can carry its technical debt load for a while. But because it severely limits the ability of apps to integrate into today’s cloud-based technological ecosystem, sooner or later the debt must be dealt with. Otherwise, the company won’t be able to keep up with the rapidly evolving demands of its marketplace.

Where Technical Debt Comes From

In a recent Ph.D. dissertation, Maheshwar Boodraj of Georgia State University determined that there are five major sources of technical debt in software development projects. Let’s look at each of them.

1. External Factors

This technical debt arises from outside the organization due to conditions or events that are beyond its control. For example, developers may inadvertently introduce technical debt into their applications by using or integrating with external technologies the team doesn’t control and may not fully understand. Technical debt can also be introduced by contractors who work to standards that are different from those adopted by your organization.

2. Organizational Factors

Deficiencies in a company’s organizational structure or practices often generate technical debt. Here are some factors that may cause that to happen:

  • Misalignment between business and technical stakeholders. Business representatives, who may not fully understand the technology, sometimes exert undue influence over technical decisions. Requirements statements may be incomplete, constantly changing, or, worse, missing altogether. Additionally, developers may feel they don’t have the freedom to push back against requirements that violate their budgetary, schedule, or programming practices constraints.
  • Inadequate resources. Technical debt can result when developers lack the financial, human, or technological resources they need. An inadequate budget can limit the acquisition of needed talent or technical tools, leading developers to take short-sighted shortcuts.
  • Inadequate leadership. Technical debt may result when leaders don’t provide a clear vision, careful planning, and a long-term rather than short-term focus. Such conditions often lead to high staff turnover, resulting in a loss of institutional knowledge that will inevitably be reflected in the codebase.
  • Not prioritizing technical debt. Technical debt will persist if the organization fails to devote enough time and resources to managing it.
  • Unrealistic schedules. When the pressure of meeting unrealistic delivery schedules causes developers to take shortcuts, the incorporation of significant amounts of technical debt is inevitable.

3. People Factors

Software development teams function most effectively when members understand and abide by appropriate coding standards and best practices. To consistently implement those standards, team members need the requisite skills, experience, training, and commitment. 

If any of these are missing due to inexperience, inadequate leadership, bad team dynamics, self-serving attitudes among team members, or low morale due to stressful conditions, the team won’t have the cohesion necessary to successfully manage technical debt.

4. Process Factors

Effective development teams consistently follow a process that enforces appropriate standards and practices. Minimizing technical debt requires a process that enables:

  • Adequate focus on both business requirements and non-functional requirements such as usability, reliability, scalability, performance, and security
  • Appropriate coding standards that ensure proper code reviews, adequate documentation, comprehensive testing and QA, and ongoing refactoring as necessary
  • Proper definition of the minimum viable product (MVP) for each release
  • Adhering to the Continuous Integration and Continuous Delivery (CI/CD) paradigm
  • Avoiding morale-draining team dynamics, such as too frequent or over-extended meetings, lack of proper communication among and between teams, and inflexible procedures that make responding to unanticipated conditions difficult or frustrating for team members.

5. Product Factors

An application that has a substantial amount of technical debt will usually be marked by one or more of these characteristics:

  • Complex Code that’s difficult to understand, maintain, and upgrade
  • Duplicated Code that appears in multiple places in the codebase
  • Bad Code Structure resulting from the use of inappropriate development frameworks, tools, or abstractions
  • Undisciplined Reuse of Existing Code (including code from open-source libraries) that may be carrying its own load of technical debt
  • Monolithic Architecture that is by nature difficult to understand, adapt, or refactor
  • Ad Hoc Code Fixes added over time (and often inadequately documented) as bug fixes or new feature implementations
  • Poor Architectural Design resulting in a codebase that’s fragile, complex, opaque, not easily scalable, difficult to maintain, and too inflexible to integrate into the modern technological ecosystem

Related: How to Measure Technical Debt for Effective App Modernization Planning

Techniques for Managing Technical Debt

The place to start in managing your technical debt is to first understand what you want to accomplish. Then you must ensure that you have the organizational structure, people resources, and proper coding standards and practices to get you there. Finally, you’ll need to implement a process for refactoring your existing apps. Let’s take a brief look at each of these issues.

What Needs to be Accomplished

In essence, eliminating technical debt in existing apps is about refactoring those apps from the monolithic structure that typically characterizes legacy code into a cloud-native microservices architecture, thereby producing a codebase that can easily be adapted, upgraded, and integrated with other cloud resources.

How Your Organization and Teams Need to Change

In 1967 computer scientist Melvin Conway articulated what’s come to be known as Conway’s Law:

“Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.”

Since the microservices your teams will implement consist of small units of code that do a single task and operate independently of one another, you’ll want to structure your software development organization as a group of small teams, each loosely coupled to other teams, and each having full responsibility for one or more microservices.

The Refactoring Project

Start by analyzing your application portfolio to determine the architectural complexity and technical debt load of each app. You can then prioritize which apps should be refactored (modernized) and in what order. 

That analysis will also provide the information you need to determine the budget, schedule, and required team member skill sets and experience levels for the project. And that information will, in turn, give you the ammunition you need to secure the required budget and resources.

Related: Using Machine Learning to Measure and Manage Technical Debt

Expect Pushback Against Dealing With Technical Debt

Don’t be surprised if you receive some pushback from other stakeholders in your organization. In a recent survey of senior IT professionals, 97% expected organizational pushback to app modernization proposals. As the survey report declares,

“Organizational pushback can hamstring projects before they start.”

The survey respondents were primarily concerned about cost, risk, and complexity. And that concern is warranted—modernization projects cost an average of $1.5 million, require, on average, 16 months to complete, and 79% of them fail to meet expectations.

That’s why being able to present comprehensive and accurate information about those cost, risk, and complexity factors to senior management is so important—it’s the best way to give them confidence that the benefits of dealing with the technical debt in your application portfolio will far outweigh the costs and risks of the project.

You Need the Right Analysis Tool

Trying to manually assess a legacy application portfolio that may contain multiple applications, some with perhaps millions of lines of code, is a losing proposition. That process is in itself so time-consuming and potentially inaccurate as to inspire little confidence in the estimates produced. 

What’s needed is an automated, AI-enabled analysis platform that can perform the required static and dynamic analyses of your applications in a fraction of the time required to do the job manually.

And that’s exactly what vFunction provides. The vFunction Assessment Hub can quickly and automatically analyze complexity and unravel dependencies in your apps to assess the level of technical debt and refactoring risk associated with each app and the portfolio as a whole. Then, the vFunction Modernization Hub can automatically refactor your complex monolithic apps into microservices, substantially reducing the timeframe, risks, and costs associated with that endeavor.

To see first-hand how vFunction can help you manage your technical debt, request a demo today.

The Strangler Architecture Pattern for Modernization

For companies that depend on legacy applications for critical business processing, modernizing those apps to make them compatible with today’s technologically sophisticated cloud ecosystem is crucial. But because most legacy apps are monolithic, updating them can be a difficult, time-consuming, and risky process. 

A monolithic codebase is organized as a single unit that has function implementations and dependencies interwoven throughout. Because a change to one part of the code can generate unexpected side-effects in other parts of the codebase, any update has the potential to cause the app to fail in unpredictable ways.

Yet, if these legacy apps are to continue fulfilling their business-critical missions, they must have the flexibility and adaptability necessary for keeping pace with the ever-evolving requirements of a fast-changing marketplace and technological environment. What’s needed is a means of encapsulating any changes to the legacy code so that only the targeted function is affected.

The Strangler Fig Architecture pattern meets that need. It allows legacy apps to be safely updated by replacing each function with an independent microservice. This enables developers to incrementally modernize specific functions without impacting the operation of other portions of the app.

What is the Strangler Fig Pattern?

Martin Fowler, Chief Scientist at Thoughtworks, coined the term in 2004. He noticed that strangler fig seeds, which germinate in the upper branches of other trees, send down roots that surround and eventually strangle their host tree. In effect, the strangler fig kills the original tree and takes its place.

Fowler saw this as a metaphor for how a large, monolithic software application could be modernized by surrounding it with a new superstructure of microservices that, over time, strangles and replaces the original app. A microservice is a small, self-contained codebase that performs only one task and replaces a single function or service in the legacy app. It can be updated without affecting other parts of the app. 

As new microservices are added over time, they take over the functions of the original codebase one by one until the functionality of the legacy app is entirely replaced by microservices. At that point, the original app has been fully “strangled” and can be decommissioned.
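To show what “small, self-contained codebase that performs only one task” looks like in practice, here is a minimal sketch of a single-purpose service in Java using Spring Boot. The invoice-lookup capability, class name, and payload are invented for illustration only; they stand in for whichever function is being carved out of the monolith first.

```java
// A hypothetical minimal microservice: one Spring Boot application that owns exactly one
// capability (invoice lookup) extracted from the monolith. Names and payloads are illustrative.
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@SpringBootApplication
@RestController
public class InvoiceServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(InvoiceServiceApplication.class, args);
    }

    // The single task this service performs; every other function stays in the legacy app for now.
    @GetMapping("/invoices/{id}")
    public String getInvoice(@PathVariable String id) {
        return "{\"id\": \"" + id + "\", \"status\": \"PAID\"}"; // placeholder response
    }
}
```

Because the service owns only this one endpoint, it can be tested, deployed, scaled, and rolled back without touching anything else in the legacy application.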

Related: Migrating Monolithic Applications to Microservices Architecture

Why the Strangler Fig Pattern is Ideal for Application Modernization

Faced with the daunting prospect of replacing or rewriting their portfolio of legacy apps, some companies settle for simply migrating them, pretty much as-is, to the cloud. But though that approach may yield some benefits, it falls far short of true modernization. That’s because a monolithic codebase in the cloud is still monolithic, and retains all the detrimental characteristics of that architecture.

The Strangler Fig Pattern enables true legacy app modernization by allowing you to replace the functions of the original app one at a time without having to rewrite the entire app all at once. As key functions are re-implemented one by one as microservices, the app continues to function and can be fully transformed without ever going offline.

A key element of the strangler paradigm is the use of an interface layer, called a façade, between the original app and its microservices superstructure. All communications to and from the legacy app go through the façade, which includes feature flags that you can set to dynamically control whether the original code for a function or its microservice replacement is live.
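To make the routing idea concrete, here is a minimal Java sketch of such a façade, assuming hypothetical “billing” and “reporting” features and made-up internal hostnames. It illustrates the principle rather than a production gateway.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Minimal strangler-fig façade sketch: every call passes through this layer, which consults a
// feature flag to decide whether the legacy monolith or the replacement microservice handles it.
// Hostnames and flag names are illustrative, not taken from any real deployment.
public class StranglerFacade {

    // true = route the feature to its new microservice; false = keep routing to the monolith
    private final Map<String, Boolean> featureFlags =
            new ConcurrentHashMap<>(Map.of("billing", true, "reporting", false));

    private final HttpClient http = HttpClient.newHttpClient();

    public String handle(String feature, String path) throws Exception {
        boolean useMicroservice = featureFlags.getOrDefault(feature, false);
        String baseUrl = useMicroservice
                ? "http://" + feature + "-service.internal"  // extracted microservice
                : "http://legacy-monolith.internal";          // original application
        HttpRequest request = HttpRequest.newBuilder(URI.create(baseUrl + path)).GET().build();
        HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    // Rolling a feature back to the legacy code is a flag flip, not a redeployment.
    public void rollBackToLegacy(String feature) {
        featureFlags.put(feature, false);
    }
}
```

In practice the façade is usually an API gateway or reverse proxy, and the flags live in external configuration rather than in code, but the principle is the same: flipping a flag determines whether the original code or its microservice replacement is live.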

This approach provides some major advantages:

1. Allows incremental updating

If you elect to entirely rewrite a legacy app, you can’t use the new system until the rewrite (and all testing) is complete. The strangler approach allows you to incrementally add features and capabilities without disrupting the operation of the app or taking it offline.

2. Enables quicker modernization

As each new microservice is added, the benefits it provides, such as increased adaptability, flexibility, scalability, and performance, take effect immediately. As IBM notes,

“The great thing about applying this pattern is that it creates incremental value in a much faster timeframe than if you tried a “big bang” migration in which you update all the code of your application before you release any of the new functionality.”

3. Minimizes risk

Any attempt to replace or upgrade a large monolithic app all at once will almost certainly introduce new bugs that can cause significant downtime once you bring the new codebase online. But, as Bob Reselman of Red Hat explains,

“Small failures are easier to remedy than large ones, hence the essential benefit of the Strangler pattern.”

Because the strangler approach incorporates changes in small steps, with each new microservice being thoroughly tested before going live with the app, downtime due to new bugs can be almost totally eliminated.

4. Allows you to choose the pace of modernization

Since the app is never taken offline, you can implement the modernization project at a pace that’s comfortable for your team (and budget).

5. Allows easy and seamless rollbacks

Rolling back a change that isn’t working correctly is easy. Each new microservice deployment can be quickly and cleanly reversed simply by setting feature flags appropriately.

6. Eliminates the need to maintain two separate codebases

New functions are implemented as microservices that surround the legacy codebase; the original app is never changed. Since any needed changes (including those that correct bugs in the legacy code) are made only to the microservices superstructure, the original codebase need not be separately maintained.

7. Enhances QA

Because microservices can be run in parallel with the original code for QA purposes, each change can be comprehensively tested in the app’s production environment before it goes live.

How Refactoring With Strangler Fig Helps You Avoid Destructive Coding Anti-Patterns

As we’ve seen, the Strangler Fig Pattern provides an ideal template for modernizing legacy apps. However, some other widely used software design patterns yield far more negative results. These are called, appropriately, anti-patterns. Martin Fowler notes that the term was coined in 1995 by programmer Andrew Koenig, who described it this way:

“An antipattern is just like a pattern, except that instead of a solution it gives something that looks superficially like a solution but isn’t one.”

According to Fowler, anti-patterns are extraordinarily dangerous because they initially fool developers into thinking they are appropriate solutions to common software coding problems, only to reveal their detrimental consequences later when the damage has been done. As software engineer Kealan Parr declares:

“In software, anti-pattern is a term that describes how NOT to solve recurring problems in your code. Anti-patterns are considered bad software design… They generally also add “technical debt” – which is code you have to come back and fix properly later.”

Parr lists some of the more common anti-patterns. They include:

Spaghetti Code

This anti-pattern is often encountered in monolithic legacy apps. The term describes a codebase that has little or no structure. There’s no modularization, and function implementations and dependencies are intermingled throughout the code, just like strands of spaghetti on a plate. As a result, the logical flow of the application is extremely difficult to understand. Parr calls it a maintenance nightmare:

“You will constantly break things, not understand the scope of your changes, or give any accurate estimates for your work as it’s impossible to foresee the countless issues that crop up when doing such archaeology/guesswork.”

Because of these characteristics, updating or adding features to spaghetti code is an extraordinarily difficult and risky process.

Golden Hammer

This anti-pattern derives its name from a quote attributed to Abraham Maslow:

“I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail.”

That’s a human tendency to which software developers are as prone as anyone else. When they have a high level of competence with, love for, or comfort with particular coding tools, languages, or architectures, they naturally seek to apply them across the board, even in situations where they are not the best options. The result can be highly detrimental—as Parr says, “Your whole program could end up taking a serious performance hit because you are trying to ram a square into a circle shape.”

Boat Anchor

Boat anchors do nothing but retard the progress of the vessel to which they are attached. And that’s exactly what the boat anchor anti-pattern does with software. The term describes code modules that developers insert into the codebase to implement functions not currently needed or used. They do so because they think those functions might be needed later.

This too, says Parr, is a maintenance nightmare. Developers who are new to the codebase, or who have not worked with it for some time, will have a hard time identifying boat anchor modules and figuring out whether they impact the logical flow of the program or are entirely superfluous. There’s a real possibility of your developers spending significant amounts of time and effort on understanding and debugging modules that literally do nothing.

Dead Code

This term describes code that, unlike boat anchors, implements functions that are not only used in the application but which may be called frequently from many different places in the codebase.

The problem is that it’s not clear what this code is doing or why it’s needed. Perhaps it had an important function at some point, but now the issues it was created to solve no longer exist. On the other hand, it could be crucial for handling infrequent edge or boundary conditions that current developers haven’t yet run into but eventually will. Because you can’t be sure why it’s there, you don’t dare to remove it. So, it remains in the codebase as a time-waster and generator of confusion for the developers who have to deal with it.

Proliferation of Code

This anti-pattern occurs when there are objects in the code that seem to exist only to invoke other more strategic objects. These are essentially useless “middleman” objects that provide no additional value, but only add an unnecessary level of abstraction, and therefore confusion, to the code. Such objects should be bypassed and removed to make the code more easily understandable.

God Object

This is sometimes called the “Swiss Army Knife” anti-pattern. It describes objects that are accessed by many other objects in the codebase for a multitude of different and often unrelated purposes. Such objects are problematic because they violate the Single Responsibility principle of coding, which says that every class, module, or function should do only one thing. According to software architect Thanh Le, they are “hard to unit test, debug and document” and can be a “maintenance nightmare.”
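A short, hypothetical Java sketch shows the difference between a god object and classes that respect the Single Responsibility principle; the class names and methods are invented purely for illustration.

```java
// A hypothetical "god object": one class that persists orders, prices them, emails customers,
// and produces reports. Every other part of the codebase ends up depending on it.
class OrderManager {
    void saveOrder(String orderId) { /* JDBC persistence logic */ }
    double calculateTotal(String orderId) { /* pricing rules */ return 0.0; }
    void emailConfirmation(String orderId) { /* SMTP notification logic */ }
    void exportToCsv(String orderId) { /* reporting and file I/O */ }
}

// The same responsibilities separated per the Single Responsibility principle. Each class now
// does one thing, which also makes it a natural candidate for its own microservice later.
class OrderRepository {
    void save(String orderId) { /* persistence only */ }
}

class PricingService {
    double totalFor(String orderId) { /* pricing only */ return 0.0; }
}

class NotificationService {
    void sendConfirmation(String orderId) { /* messaging only */ }
}

class OrderExporter {
    void exportToCsv(String orderId) { /* reporting only */ }
}
```

Each of the smaller classes can be unit tested, documented, and changed in isolation, which is exactly what the god object prevents.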

Strangler Fig to the Rescue!

Refactoring a legacy app according to the Strangler Fig Pattern will remove anti-patterns such as these from the codebase almost automatically. For example, Strangler Fig refactoring eliminates spaghetti code by re-implementing legacy app functions as a set of independent, single-task microservices that can be easily understood, maintained, and upgraded. 

Similarly, code that does too much is replaced by individual microservices with precisely specified, single-purpose functionality, while unneeded modules are not re-implemented at all. And Golden Hammer technology choices can be eliminated by implementing new microservices using a carefully chosen modern technology stack.

Best Practices to Implement the Strangler Fig Pattern

How can you maximize the benefits of the strangler fig paradigm in modernizing your legacy apps? Here are some best practices:

1. Automate the process using an AI-based modernization platform

As industry insider Oliver White has said,

“Large monolithic applications need an automated, data-driven way to identify potential service boundaries.”

Manual analysis of a monolithic codebase with millions of lines of code is a time-consuming, error-prone process. An automated, AI-based analysis platform can perform that task quickly, comprehensively, and at scale. Using static and dynamic analyses, it can assess the monolithic codebase for technical debt, complexity, and risk; reveal functional flows and dependencies; identify service domain boundaries; and quantify the amount of effort that will be needed to refactor the app.

That information will allow you to determine:

  • the negative impact of your legacy apps’ technical debt on your ability to innovate
  • the ROI that can be realized from modernizing some or all of your legacy apps
  • which applications should be modernized and in what order
  • which legacy app services should be re-implemented as microservices and which should not
  • the functional scope of each microservice
  • which functions are so similar or overlapping that they can be consolidated into a single microservice

Once the analysis phase is complete, a state-of-the-art modernization platform will be able to substantially automate the process of refactoring the monolithic code into microservices.

2. Pick the right starting point

For most companies, it’s neither feasible nor desirable to modernize all their legacy apps at once. Instead, it’s best to start with the apps that have the greatest business value and also carry a high degree of technical debt. Then, for each app, choose the functions that have the highest impact on your business operations as the first to be re-implemented as microservices.

3. Pick the right ending point

It’s natural to want to replace all your legacy apps with microservices. But the costs of refactoring an entire legacy suite may exceed the benefits. In such cases, it might be best to continue using the original app for specific functions that are isolated, stable, and don’t require upgrading, while re-implementing as microservices any functions that must be easily upgradeable, or that interact directly with other systems or resources.

4. Follow an incremental, step-by-step process

The Strangler Fig Pattern provides its greatest benefits when it is applied incrementally, one microservice at a time. Avoid trying to modernize entire apps all at once. As one research paper succinctly advises:

“Start small and gradually evolve the system (baby steps).”

5. Implement new functionality only in microservices

When you begin the modernization process, you should freeze the legacy codebase and implement any new functionalities only through microservices. If you continue to make updates to the original app, you create two simultaneously evolving codebases, both of which must be supported, tested, and synchronized.

Related: Simplify Refactoring Monoliths to Microservices with AWS and vFunction

How AWS Migration Hub Refactor Spaces and vFunction Work Together

As an AWS Partner, vFunction provides an automated, AI-driven modernization platform that closely integrates with AWS Migration Hub Refactor Spaces to enable developers to quickly and safely transform complex monolithic Java applications into microservices and deploy them into AWS environments.

Refactor Spaces establishes, maintains, and manages the modernization environment, and orchestrates AWS services across accounts to facilitate the refactoring of legacy apps from monoliths to microservices. Refactor Spaces implements the Strangler Fig Pattern for the target application and allows developers to easily manage communication between services throughout the environment.

Developers begin the refactoring process by using vFunction to generate an automated, AI-based analysis that quantifies the complexity of monolithic legacy apps. Using both static and dynamic analyses, vFunction provides the detailed information regarding technical debt, complexity, and risk that’s required for developing a comprehensive refactoring plan that prioritizes which apps and services will be converted and in what order.

vFunction then automatically decomposes the monolithic apps into microservices. Using sophisticated, AI-driven static analysis, the vFunction platform analyzes architectural flows, classes, usage, memory, and resources to detect and expose critical business domain functions buried in the code and untangle complex dependency relationships.

See vFunction For Yourself

The vFunction platform is unique in its ability to make refactoring monolithic legacy apps into microservices as quick, easy, painless, and safe as possible. It easily handles codebases with tens of millions of lines of code, and can accelerate the modernization process by at least a factor of 15. If you’d like to see for yourself what it can do, please schedule a demo today.

How Much Does it Cost to Maintain Legacy Software Systems?

Many companies depend on legacy software for some of their most business-critical processing. But useful as they are, those applications can hold companies back from being able to keep pace with rapidly changing marketplace demands. The culprit is the technical debt of their legacy apps. 

Technical debt makes software difficult and risky to change, which increases the cost of maintaining legacy software systems. Dealing with technical debt in legacy applications can eat up substantial portions of a company’s IT budget and schedule, diminishing the organization’s ability to create new features and capabilities. In one survey of C-level corporate executives, 70% of respondents said that technical debt severely limits their IT operation’s ability to innovate.

Yet, many companies hesitate to commit themselves to modernizing their legacy apps. They often take the attitude that since their legacy systems are still functioning and doing the job they were designed to do, there’s no need to invest the time, money, and organizational effort that would be required to update them. But is that really the case? What are the true costs of maintaining legacy systems that may be approaching or beyond their technological expiration dates?

How Technical Debt Impacts the Cost of Maintaining Legacy Software Systems

TechTarget explains the concept of technical debt this way:

“Software development and IT infrastructure projects sometimes require leaders to cut corners, delay features or functionality, or live with suboptimal performance to move a project forward. It’s the notion of ‘build now and fix later.’ Technical debt describes the financial and material costs that come with fixing it later.”

As with financial debt, technical debt consists of two distinct components: principal and interest. Both must be paid off before the debt can be retired. In the financial sphere, the concepts of principal and interest are well understood. But how do those terms apply to technical debt?

The Principal on Technical Debt

The principal on your technical debt is the amount you’ll pay to clean up (or replace) the original substandard code and bring that application into the modern world. According to one research report, companies typically incur $361,000 of technical debt for every 100,000 lines of code in their software.
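As a rough illustration of what that rate implies, a hypothetical 8-million-line enterprise monolith would be carrying roughly $29 million of principal (80 × $361,000) before a single dollar of interest is counted.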

Just as with financial debt, you must eventually pay off the principal on your technical debt, and until you do, you’ll pay interest on it.

The Interest on Technical Debt

The interest on technical debt consists of the ongoing charges you incur in trying to keep flawed, inflexible, and outmoded legacy applications running as the technological context for which they were designed recedes further and further into the past. 

It’s an unavoidable cost of maintaining legacy systems. And those interest charges can be substantial—according to InformationWeek, U.S. companies are spending $85 billion every year on maintaining bad technology.

Related: How to Measure Technical Debt for Effective App Modernization Planning

Specific Legacy Software Maintenance Costs

According to Gartner, by 2025 companies will be spending 40% of their IT budgets on simply maintaining technical debt. But that’s not the worst of it. The direct financial cost of maintaining legacy systems is just the tip of the iceberg. There are other impacts on your company and its IT organization that may be even more significant. Let’s take a look at some of them.

Wasted Time

According to a survey by Stripe, out of a 41.1-hour average work week, the typical software developer spends 13.5 hours, or almost a third of their time, addressing technical debt. When developers were asked how many hours per week they “waste” on the maintenance of bad legacy code, the average of their answers was 17.3 hours. That means that developers typically believe they are “wasting” more than 42% of their work week on legacy software.

Lowered Morale

The fact is, most of today’s developers just don’t like working on legacy code or dealing with technical debt. They’re usually far more interested in working with modern programming languages, architectures, and frameworks. 

For many of them, spending significant amounts of time dealing with older, technically obsolescent applications can seem mind-numbing, unproductive, and frustrating. When Stripe asked about issues that negatively impact developers’ morale:

  • 78% named “Spending too much time on legacy systems”
  • 76% cited “Paying down technical debt”

The natural result of low morale on a development team is decreased productivity and increased turnover. With the U.S. currently experiencing a shortage of more than a million software developers, the costs for finding, hiring, and training replacements for unhappy employees can be significant.

Opportunity Costs

Not only does technical debt impose a direct financial cost on companies, but there is a very real opportunity cost as well—the time devoted to maintaining legacy applications is time that’s not being spent to develop the innovations that can propel a company forward in its marketplace. A recent Deloitte report highlights the importance of this issue:

“The accumulation of technical debt adversely affects an organization’s ability to innovate and employ new technologies … which makes it harder for the organization to retain its market share, secure clients, and stay on track with market trends.”

Other Indirect Costs

There are other costs of maintaining legacy software systems that, while perhaps not easy to quantify, are nevertheless quite real. These include:

  • Slow test and release cycles: Technical debt makes legacy apps brittle (easy to break) and opaque (hard to understand), which lengthens upgrade/test/release cycle times.
  • Inability to meet business goals: The inability to quickly release and deploy innovative new applications or features can cripple a company’s ability to meet its marketplace goals.
  • Security exposures: Legacy apps were not designed to modern security standards, and neither were the quick fixes, patches, and ad hoc workarounds that typically have been incorporated over time.

A report from McKinsey sums up the negative impact of technical debt this way:

“Poor management of tech debt hamstrings companies’ ability to compete.”

Overcoming the Challenges of Maintaining Legacy Applications

Continuing to spend your company’s time and resources on keeping “venerable” applications running is a losing proposition—the cost of maintaining legacy software systems will only increase over time. Instead, the key to maintaining the value these applications have for the organization is to modernize them to bring them into today’s technological universe. 

That means transforming the typically monolithic structure of these apps into a cloud-native microservices architecture. The result will be a codebase that has minimal technical debt, and that can easily be adapted, upgraded, and integrated with other cloud resources.

But the modernization process is not without its own challenges. The average app modernization project costs $1.5 million and takes about 16 months to complete. And after all that investment of time and resources, 79% of those projects fail. Speaking of companies that have not had the success they hoped for with their application modernization efforts, McKinsey reports that:

“In our experience, this poor performance is a result of companies’ uncertainty about where to start or how to prioritize their tech-debt efforts. They spend significant amounts of money on modernizing applications that aren’t major contributors to tech debt, for example, or try to modernize applications in ways that won’t actually reduce tech debt.”

This assessment points toward two critical elements of a successful legacy app modernization program:

  1. You must choose the right modernization strategy – it’s not enough to simply migrate legacy apps to the cloud. Instead, true modernization involves refactoring monolithic legacy codebases into microservices.
  2. To know where to start and how to prioritize your modernization efforts, you need comprehensive, quantifiable data concerning the complexity, risk, and technical debt of your legacy app portfolio.

Let’s look at this data requirement in a little more detail.

Related: Succeed with an Application Modernization Roadmap

Getting the Data You Need for Legacy App Modernization Success

As the McKinsey report indicates, without good data that allows you to assess which of your legacy apps need to be modernized, and in what order, your modernization efforts are likely to fall short. But asking developers to manually assess a legacy application portfolio that may contain multiple applications, some with perhaps tens of millions of lines of code and thousands of classes, is rarely a viable approach. 

The task of unraveling the functionalities and dependencies of a large, non-modularized, monolithic codebase is simply too complex for humans to perform effectively in any reasonable timeframe. As one IT leader told McKinsey,

“We were surprised by the hidden complexity, dependencies and hard-coding of legacy applications, and slow migration speed.”

Instead of a manual approach, what serves you best is an automated, AI-enabled analysis platform that can perform the required static and dynamic analyses of your legacy apps in a fraction of the time your developers would require. Such a solution will also provide the information you need to quantify the expected ROI of your modernization program.

The vFunction platform offers all those features and more.

To see first-hand how vFunction can help you modernize your legacy apps, schedule a demo today.