
Application Modernization Trends: Goals, Challenges, and Insights

Application modernization continues to gain traction. According to Foundry’s State of the CIO Study 2023, modernizing applications and infrastructure remains the third-highest initiative for Chief Information Officers (CIOs). It is also among the top five factors driving IT investment dollars in 2023. In fact, 91% of CIOs expect their budgets to increase or remain the same, and much of that funding is needed to address application modernization trends.

Although organizations have made progress in modernizing legacy systems, they still have work to do if they want to achieve the following top five business initiatives:

  • Improve operational efficiency
  • Increase cybersecurity defenses
  • Transform business processes
  • Enhance the customer experience
  • Increase profitability

The ongoing focus on modernization indicates that Kubernetes (K8s) and cloud platforms alone have not solved the problems of large legacy monoliths that cannot be easily lifted and shifted. In these cases, application modernization will require refactoring or rearchitecting.

Modernization is at the core of 2023’s number two priority—cybersecurity. Legacy systems present a significant risk. Not only are they unable to defend against modern attack vectors, but they contain old vulnerabilities that were never fixed. Cybercriminals actively scan potential targets for legacy systems that have unpatched vulnerabilities.

At the same time, outdated systems and monolithic architecture hinder business operations and user experience. Older technologies do not play well with advanced solutions. Transforming operations for improved efficiencies is the top priority for 45% of CIOs in 2023. In the current economic environment, more efficient processes are important for lowering expenses and protecting profitability.

Cloud migration plays a significant role in application modernization. While the cloud is not a prerequisite for modernization, many companies have made modernization part of their cloud strategy. Exactly how the two combine depends on the organization.

Application Modernization Trends and the Legacy Dilemma

For most businesses, existing applications remain vital to business processes: they support core functionality and host essential data. Organizations keep these legacy systems running because they are crucial to operations, and dismantling them to build replacements from scratch would destabilize or disrupt the business.

Related: What is Application Modernization? The Ultimate Guide

Monolithic application technologies, infrastructure, and architecture are more rigid than newer microservices architectures, and the older technologies limit IT teams’ ability to develop new features quickly and efficiently. Some legacy systems are already obsolete, making them challenging or impossible to replace. In such cases, the only alternative is to modernize the applications.

How Companies View the Legacy Dilemma

In many ways, companies view legacy systems as “the devil they know.” They are usually an integral part of business operations, and the magnitude of swapping out a core system can seem unfathomable. As long as the system functions, companies are reluctant to risk disruption.

For many organizations, the solution resides in the cloud. If lifting and shifting monolithic applications to the cloud adds to the life of a legacy system, many companies are willing to integrate old code into cloud-based platforms. However, the strategy is not without challenges.

Addressing Lift and Shift Challenges

Old and new technologies do not merge seamlessly; integrating them often requires APIs or middleware so the systems can coexist. Even once operational, the combined systems may fall short on performance. These are just a few of the challenges of rehosting a legacy application in the cloud.
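To make the middleware point concrete, here is a minimal, hypothetical sketch of the adapter pattern in Java: a thin HTTP facade, built with only the JDK’s built-in server, wraps a legacy class so cloud-based services can call it over REST without touching the legacy code. All class names and the endpoint are invented for illustration.

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Hypothetical legacy component we cannot modify.
class LegacyInventorySystem {
    int stockLevel(String sku) {
        // Imagine this queries a decades-old database.
        return sku.hashCode() & 0xFF;
    }
}

// Thin HTTP facade that lets cloud-native services call the legacy code.
public class LegacyFacade {
    public static void main(String[] args) throws IOException {
        LegacyInventorySystem legacy = new LegacyInventorySystem();
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/stock", (HttpExchange ex) -> {
            String sku = ex.getRequestURI().getQuery(); // e.g. /stock?ABC-123
            if (sku == null) sku = "unknown";
            byte[] body = ("{\"sku\":\"" + sku + "\",\"stock\":"
                    + legacy.stockLevel(sku) + "}").getBytes(StandardCharsets.UTF_8);
            ex.getResponseHeaders().set("Content-Type", "application/json");
            ex.sendResponseHeaders(200, body.length);
            try (OutputStream os = ex.getResponseBody()) { os.write(body); }
        });
        server.start();
    }
}
```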

Incompatibility

It may be possible to lift and shift some applications to the cloud, but others are simply not compatible. Identifying those apps up front helps determine how to handle them before the move. Rehosting can also introduce performance and latency issues, and applications that depend on third-party software are often unsuitable for the lift and shift method.

Inefficiencies

While rehosting may move a legacy application to the cloud faster, it may take longer to optimize the older technology. Some apps may also be unable to leverage cloud computing resources. Since legacy applications are not cloud-native, it may be challenging to run them efficiently. Other application modernization methods, such as refactoring or rearchitecting, can deliver a more cloud-native application.

Cost

Moving a legacy application to the cloud with minimal changes may appear to be the least expensive and lowest-risk option, but the long-term costs can be substantial. Without a cloud-native environment, organizations may struggle to deliver competitive products, resulting in lower market share and fewer customers.

Even though the legacy application is operating in the cloud, it cannot take advantage of all cloud capabilities. Critical visibility may not be available, making it more difficult for IT to troubleshoot the application or defend against cyberattacks. When deciding how to best modernize applications, businesses need to evaluate both long- and short-term factors.

Security Issues

Cloud security depends on the individuals implementing it, and on-premises security best practices do not translate directly to the cloud. Organizations deploying their first cloud application often lack the expertise to secure a cloud environment, and finding the talent to fill that gap is a challenge.

Staffing shortages in the tech field continue. The US Bureau of Labor Statistics predicts that the need for cybersecurity personnel will increase by 35% between 2021 and 2031. Job openings for software developers will increase by 25% during the same ten years. Overcoming the challenges of finding and retaining the necessary talent is a formidable task to ensure a secure cloud environment.

Shifting Priorities 

A recent survey on the future of the cloud found that organizations that view moving to the cloud as a strategic part of their digital transformation achieved higher levels of innovation than their less strategic counterparts. The survey highlighted the value of maximizing cloud services. For example, those companies with cloud services that support advanced technologies such as artificial intelligence are 1.7 times more likely to receive increased value than businesses with a less mature infrastructure.

However, cloud-based transformation requires modernization. According to IBM, modernization amplifies the value of the cloud as much as 13 times if it is part of an end-to-end transformation. Even though 83% of executives agree that modernizing applications and data is critical to their business strategies, only 27% have modernized their workflows. 

As priorities shift, organizations are re-evaluating their modernization strategies. Aligning business, modernization, and cloud strategies enables companies to optimize their cloud services to utilize application modernization trends.

Creating a Cloud Strategy for Application Modernization 

Every business strategy should include a cloud strategy. Companies adopting a “cloud-first” policy need a plan for onboarding new and modernizing old workloads. As they look to develop strategies, businesses should consider implementing policies such as the following:

Modernizing Data

Gartner analysts predict that by 2025, at least 85% of companies will adopt the cloud-first principle. However, implementing their digital strategies won’t be easy without cloud-native technologies, because the majority of enterprise workloads are not yet cloud-ready.

Related: Q&A Series: The 3 Layers of an Application: Which Layer Should I Modernize First?

So how do workloads become cloud-ready? Modernizing data means replacing legacy databases with platforms that can handle distributed and streaming data sources and sinks. To modernize the data layer, modernization experts recommend starting with the business logic layer.

Migrating to a New Architecture

Another application modernization trend is embracing new architectures. Instead of shifting a legacy application to the cloud in its entirety, you can move some of its features to more efficient architectures. This enables faster development.

When modernizing any application architecture, architectural observability tooling is essential: it pinpoints architectural hotspots and drift. Addressing these problems incrementally while moving to a new architecture also mitigates security, scalability, and reliability concerns and helps resolve issues with tolerance, capacity, and redundancy.

Turning Monolith into Microservices

Monolithic applications have a single large codebase. In contrast, a microservices application is composed of small services that operate independently, each handling a single business function. This transformation improves the development and deployment of updates and new features, makes technology stacks more flexible, and minimizes the risk of downstream effects when underlying code changes.
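As a rough sketch of what this decomposition means in code, the hypothetical Java fragment below contrasts an entangled monolith entry point with a bounded billing service behind a narrow interface. The names and logic are illustrative only.

```java
// Monolith style: checkout logic reaches directly into unrelated modules,
// so a change to reporting or shipping code can break billing.
class MonolithCheckout {
    void checkout(String orderId) {
        // ... inventory, billing, shipping, and reporting all entangled here
    }
}

// Microservice style: one bounded service per business capability,
// exposed through a narrow interface that hides its internals.
interface BillingService {
    Receipt charge(String orderId, long amountCents);
}

record Receipt(String orderId, long amountCents, boolean success) {}

class DefaultBillingService implements BillingService {
    @Override
    public Receipt charge(String orderId, long amountCents) {
        // Billing owns its own data store; no other service touches it.
        return new Receipt(orderId, amountCents, true);
    }
}
```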

Moving to the Cloud

The cloud revolutionized digital experiences with innovations such as mobile payments, and most legacy applications need cloud modernization to keep pace. Cloud-native platforms allow developers to leverage the principles and tools of the cloud environment, making it possible to deploy new digital workloads there.

Going Hybrid

In some cases, fully modernizing for the cloud is unnecessary. Depending on business goals and budgets, organizations can incorporate public, private, and hybrid clouds. For instance, if an application experiences usage spikes, a public cloud can absorb them, scaling appropriately to accommodate demand at lower cost. If there is little or no financial gain from a complete migration, a hybrid cloud is another option.

Incorporating Trends

Unless modernization is part of a cloud strategy, organizations will fail to realize its full value. Shifting legacy code to the cloud doesn’t provide the agility or resilience required in today’s competitive environment. Without application modernization, companies cannot address the 2023 trends impacting digital transformation.

How 2023 Trends Impact Application Modernization

Not all trends are positive. Ongoing labor shortages and cost-based decisions will hamper modernization efforts. Disruptive technologies will add pressure for cloud-native capabilities, and a lack of cultural change will allow technical debt to accumulate. These are just a few of the trends companies must address as they look to the future.

Finding Tech Talent

IBM’s study found that 45% of companies consider a lack of expertise as an obstacle to modernization. With less than 10% of employees having cloud or modernization experience, organizations need to look beyond new hires to acquire the expertise. Executives say financial constraints are the primary reason they lack experienced employees. 

  1. Recruiting talent is expensive. Despite recent staff reductions in the tech sector, finding people to fill open positions can still take four to six months. That assumes CIOs can find them. Gartner found that 86% of companies have encountered more competition for candidates in 2023. Stiff competition means higher wages at a time when money is tight, and inflation paints an uncertain economic outlook. 
  2. Retaining staff is critical. Gartner’s survey found 73% of CIOs worry about staff attrition. As demand continues to outpace supply, headhunters are looking to entice employees to change employers. Companies need to invest in their technical staff if they want to retain them.

Providing growth opportunities not only improves a business’s technology capabilities but also increases employee retention. Unfortunately, 43% of organizations cite budget constraints as the reason they fail to offer skills development. Another 38% say they are too busy to lose time to training, and 32% would rather hire new talent. 

Related: Why Organizations Are Adding App Modernization to CCOE

Deciding whether to recruit or retain depends on an organization’s skills gap. Rather than default to a set strategy, CIOs need to determine what in-house capabilities exist with a little upskilling and what expertise needs to be hired. CIOs should also consider modernization tools that can reduce the time individuals spend on low-value tasks.

Understanding Disruptive Technologies 

Knowing how disruptive technologies will impact business growth begins with modernization. New technologies such as artificial intelligence (AI), the Internet of Things (IoT), and virtualization all require modern applications operating in a cloud-native environment. Legacy systems will be too far removed to fit comfortably with emerging technology.

Artificial Intelligence

Generative AI produces new content by acquiring and synthesizing data to compose responses. For example, ChatGPT is an AI-powered chatbot that understands natural language, retains context, and delivers the most probable response. While generative AI is in its infancy, imagine how personalized customer experiences could become. Online shoppers could finally receive answers to questions such as:

  • Will this chair go with the rest of the room?
  • Which appliance is the best choice for my needs?
  • What goes with this shirt?

Answers to these questions can quickly dispel barriers to online purchases. However, organizations will need a modern infrastructure to take advantage of generative AI.

Internet of Things (IoT)

From drones to sensors, more devices are being deployed every day. Each device collects data that, when totaled, results in millions, even billions, of data points. Processing massive amounts of information requires cloud-based resources. It demands modernized applications that can turn data into valuable insights. 

When an agricultural enterprise invests thousands in IoT devices, it needs applications that can take advantage of cloud computing capabilities. Deploying atmospheric sensors across acres of farmland helps farmers know when conditions are right for planting and harvesting. Having the right foundation ensures the results will be comprehensive and timely.

Controlling Technical Debt

Organizations continue to collect technical debt. According to McKinsey, they are stuck in a vicious cycle where IT struggles to keep up with requirements—expediency rules how solutions are implemented. The landscape grows more complex with each less-than-optimum deployment.

Most companies are aware that technical debt is killing modernization efforts. What they may not realize is that technical debt accounts for roughly 40% of their IT estate. For every project, companies pay an additional 10% to 20% to address technical debt, and 30% of CIOs believe technical debt consumes at least 20% of their new product budget.

McKinsey’s research found that reducing technical debt has far-reaching impacts. Engineers could spend as much as 50% more time working on value-oriented products. They would spend less time addressing system complexities. Uptime would improve, and resiliency would become a reality. To move forward, businesses need to control their technical debt.

Reducing technical debt isn’t just an IT problem. It’s a cultural problem where expectations focus on fast and low-cost solutions. No matter the intentions, if the culture is more concerned with immediate results than long-term viability, technical debt will continue to accumulate. Without an application modernization plan, accumulated debt will weaken an organization, making it impossible to remain competitive.

Future Proofing the Enterprise

McKinsey recommends that organizations treat budget allocations for controlling technical debt as a strategic decision. That means more than flagging funds for modernization; it means managing those funds separately, creating an environment of accountability and transparency. Executives must incorporate modernization into their strategic plan and develop monitoring processes to hold everyone accountable.

For example, suppose the accounting department desperately needs a fix and hounds IT for delivery. IT can kludge something together, but the quick fix only adds to its technical debt. Alternatively, IT could deliver the quick fix and then follow up with a solution that eliminates the associated debt. However, delivering the follow-up solution means the sales department will need to wait another two weeks for its update.

Traditional approaches would have IT deliver the quick fix and complete the sales update on time. The accumulating debt would be IT’s problem to fix while juggling the myriad of high-priority projects. In many cases, the correction never happens.

Under McKinsey’s system, the decision would be strategic. It would mean balancing the short-term gain against future modernization. It would require executives to back the appropriate strategic decision regardless of the immediate impact. 

Looking Beyond Cost

Although the majority of executives understand the toll technical debt inflicts on their businesses, they still consider cost as the primary factor when looking at application modernization. To future-proof their organizations, executives need to evaluate the opportunity costs as part of the cost analysis. What future capabilities will be lost if modernization doesn’t happen?

Moving technical debt considerations to the boardroom changes how application modernization happens. If a strategic objective is to use generative AI to improve customer experience, modernizing becomes part of the critical path. Updating older technology is woven into the business strategy to ensure that the use of generative AI happens. 

Identifying IT’s skill gaps allows companies to assess where to place their human resource dollars. It also enables businesses to find automated solutions that can free staff from time-consuming, repetitive work. The more comprehensive the talent pool, the better an enterprise can navigate the future.

Navigating the Future

vFunction’s solution helps organizations future-proof their applications. Its platform helps turn Java or .NET monolithic structures into microservices. Using AI-powered technology, the product provides IT departments with the ability to control architectural drift in a continuous modernization environment. Request a demo or watch the video to learn more about future-proofing your enterprise.

How Continuous Modernization Can Address Architectural Drift

As more organizations implement a shift-left approach to software development, architects are looking for ways to become part of a collaborative team. They can no longer deliver a design to development and walk away. With a continuous modernization approach, friction between what was planned and what was implemented disappears as teams work together to address architectural changes as early in the process as possible. 

Originally, the shift-left movement focused on security. Its goal was to create systems where security was part of the design rather than added later in the development process. The shift required software architects to consider security measures in their initial design. It meant testing earlier and addressing design limitations while development was just beginning.

The changing mindset added pressure on engineers to maintain visibility into an application’s architecture. Evolving security requirements often demanded changes in design, and that created a problem: how do you change a design if you don’t know what the design is doing in production? Even more critical, how do you control design changes in a continuous integration/continuous delivery (CI/CD) environment? Can continuous modernization help?

What is Continuous Modernization?

Continuous modernization not only extends the CI/CD process but, more importantly, enables organizations to incrementally modernize software to minimize technical debt and architectural drift. It gives companies a path for improving security as architectural vulnerabilities appear. Unlike waterfall approaches, architecture updates happen throughout the software development life cycle (SDLC) rather than being deferred to future releases, or never made at all.

All software suffers from growing technical debt when changes are based on expediency rather than design integrity. If not controlled, an application can deviate so far from its original architecture that locating and fixing flaws becomes difficult. Understanding architectural drift is imperative if teams are to leverage continuous modernization to minimize architectural erosion.

What is Architectural Drift?

Software evolves—sometimes by design, but often in response to business demands. Users want a new feature. The application needs better performance. Of course, delivery schedules are tight, requiring trade-offs. These decisions often result in technical debt and architectural drift.

Architectural drift results from the unchecked evolution of runtime software, leading to a lack of coherence and clarity in the software’s design. Dead code, class entanglements, and deep dependencies contribute to the “big ball of mud” described by Brian Foote and Joseph Yoder, which prevents architects from observing how systems work in live environments. 

Related: Getting Leadership Buy-in on a Continuous Application Modernization Strategy

Unless engineers can see the architecture in operation, they cannot determine how far the software has drifted from its original design. They’ve lost control of the ship, and it’s drifting in open waters.

How Does Architectural Drift Become a Problem?

When ships drift, they go where the ocean takes them. Left unchecked, they run aground or succumb to the elements. The same can be said of architectural drift. Without correction, a system founders: its agility falters and its viability fails. Like a ship, it succumbs to its environment.

Start with the Design

Architectural drift can begin before a developer writes a line of code. Designs that use tightly coupled structures with layered dependencies allow developers to rely on the infrastructure to maintain control. Function calls disappear into a maze that mysteriously delivers a result — almost like magic. If an error occurs, architects have few resources to help identify where the problem resides.

Even with distributed architectures, engineers can struggle. Suppose microservices deployed across an application throw an error. How do architects determine whether the error is isolated to a single instance? How do they determine what triggers it? Without observability, resolution becomes time-consuming.

Add Changes Over Time

Not every software change adds to an application’s architectural technical debt, but those that do pose a problem for engineers. During development, teams may try to follow best practices for identifying deviations from the original architecture specification. But drift happens. Whether requirements change or expediency calls, the result is architectural erosion: modifications that alter the original design. Left unchecked, these changes accumulate and increase an application’s architectural drift.

Mix in a Lack of Visibility

While visibility tools abound for applications, these same tools are not available at the architecture level. Without tools to analyze, track and correct architectural erosion, architects can’t adequately define how far the design has drifted. Even with better tools, engineers need observability capabilities.

Unlike monitoring, observability takes a proactive look at the internal state of the software during runtime. Its goal is to identify critical anomalies in a system’s architecture. To be effective, observability must be consistent, holistic, and automated. But what exactly is observability?

What is Observability? 

Observability attempts to describe the internal state of software through its external outputs. It typically draws on three data sources, known as the three pillars of observability:

  • Logs. Record what happens within an application, including its infrastructure.
  • Metrics. Defined data points used to flag unusual behavior.
  • Traces. Provide visibility of step-by-step code execution.

Events are often considered a fourth pillar. These customized records highlight potential problems through pattern identification.
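To make the pillars concrete, here is a minimal, hedged Java sketch that emits all three signals around one operation, using the SLF4J, Micrometer, and OpenTelemetry APIs (assumed to be on the classpath; names like orders.checkout are invented):

```java
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;
import io.opentelemetry.api.GlobalOpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.context.Scope;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class CheckoutObservability {
    private static final Logger log = LoggerFactory.getLogger(CheckoutObservability.class);
    private static final MeterRegistry metrics = new SimpleMeterRegistry();
    private static final Tracer tracer = GlobalOpenTelemetry.getTracer("checkout");

    void checkout(String orderId) {
        Span span = tracer.spanBuilder("orders.checkout").startSpan(); // trace
        try (Scope ignored = span.makeCurrent()) {
            log.info("checkout started for order {}", orderId);       // log
            metrics.counter("orders.checkout.count").increment();     // metric
            // ... business logic ...
        } finally {
            span.end();
        }
    }
}
```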

While the data sources provide useful information, they have their limitations. Using observability tools that combine the information into comprehensive views delivers a realistic picture of system operations. Unfortunately, not every system component has the same level of visibility tools.

Why is Architectural Observability the Answer?

System architects have worked with “big balls of mud” for decades, struggling to untangle threads and assess problems through indirect means. The difficulty with architectural observability has been the lack of purpose-built tools, and several factors make building them hard.

Systems Are Complex

Monolithic structures have given way to distributed architectures that include microservices and containers. Sustained visibility across a distributed system often requires multiple tools that deliver data in varying formats. What’s missing is data consolidation that delivers a holistic view.

Data is Complex

Sorting through volumes of data recorded in real time presents a challenge, and even with automated tools, data management can become time-consuming. If the data is not persisted, it must be extracted promptly to maintain an accurate view over time. These factors complicate tool creation, and data consistency is crucial to identifying drift.

Related: Shift Left to Avoid Technical Debt Disasters: The Need for Continuous Modernization

A further complication to consistency is data separation. In collaborative environments, having access to all pertinent data may not be an issue; however, in situations where data silos exist, incomplete information makes a comprehensive evaluation impossible.

Business is Complex

Tying architectural events to business outcomes isn’t easy. Without an understanding of business complexities, architects may focus on the wrong metrics and fail to collect crucial data for analysis. For example, engineers may place a high priority on determining why CPU usage increases when a set of microservices runs, while executives may consider increasing page load times more significant because slower load times can translate into lost revenue for an eCommerce site.

Observability allows engineers to see how released software deviates from its original design. It requires the right tools and a plan to address architectural drift.

How to Address Architectural Drift

Observability needs tools to establish a baseline and set thresholds. Best practices call for proactively detecting and correcting the abnormal behaviors that lead to architectural drift. The outcome should be a process that is consistent, holistic, and automated.

#1: Establish a Baseline

Baselines establish a starting point. They should include service topologies that itemize common and core business services. They should identify critical components that are routinely audited to detect deviations from the baseline. Automating the process allows architects to track those ad-hoc changes that impact an application’s infrastructure.
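As an illustrative sketch (not any particular tool’s implementation), the Java fragment below models a service topology baseline as a map of dependencies and flags services that have drifted from it; the data model is invented and far simpler than a real tool’s:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class TopologyBaseline {
    // serviceName -> set of services it calls (hypothetical topology model)
    static Set<String> drift(Map<String, Set<String>> baseline,
                             Map<String, Set<String>> observed) {
        Set<String> drifted = new HashSet<>();
        for (var entry : observed.entrySet()) {
            Set<String> expected = baseline.getOrDefault(entry.getKey(), Set.of());
            if (!expected.equals(entry.getValue())) {
                drifted.add(entry.getKey()); // new, removed, or changed dependencies
            }
        }
        return drifted;
    }

    public static void main(String[] args) {
        var baseline = Map.of("billing", Set.of("ledger"));
        var observed = Map.of("billing", Set.of("ledger", "reporting")); // ad-hoc coupling crept in
        System.out.println("Drifted services: " + drift(baseline, observed));
    }
}
```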

#2: Identify Service Exclusivity

As part of baselining, measure service exclusivity. Knowing how many independent classes and service resources are in use highlights dependencies that increase architectural debt. This baselining can help identify possible debt before it becomes a paralyzing problem.
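A hedged sketch of one possible exclusivity measure, the fraction of classes used by exactly one service, follows; the usage data is hypothetical and would come from runtime analysis in practice:

```java
import java.util.Map;
import java.util.Set;

public class ExclusivityMetric {
    // className -> services that use it (hypothetical, derived from runtime tracing)
    static double exclusivity(Map<String, Set<String>> usage) {
        long exclusive = usage.values().stream().filter(s -> s.size() == 1).count();
        return usage.isEmpty() ? 1.0 : (double) exclusive / usage.size();
    }

    public static void main(String[] args) {
        var usage = Map.of(
                "InvoiceFormatter", Set.of("billing"),           // exclusive
                "CustomerRecord", Set.of("billing", "shipping"), // shared: adds coupling
                "TaxTable", Set.of("billing"));                  // exclusive
        System.out.printf("Service exclusivity: %.2f%n", exclusivity(usage)); // 0.67
    }
}
```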

#3: Set Thresholds

Architects can establish thresholds for proactive observations of a system’s architecture. Automated systems enable engineers to schedule observations, configure measurements, and start analyses. Automating the collection of key metrics expedites the evaluation process for faster resolution of pending issues.
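Continuing the illustrative sketch, a simple threshold evaluator over collected measurements might look like this; the threshold values are invented placeholders, not recommendations:

```java
import java.util.List;

public class DriftThresholds {
    record Measurement(String service, double exclusivity, int longestChain) {}

    // Invented thresholds: alert when exclusivity drops or dependency chains deepen.
    static final double MIN_EXCLUSIVITY = 0.6;
    static final int MAX_CHAIN_LENGTH = 5;

    static void evaluate(List<Measurement> measurements) {
        for (Measurement m : measurements) {
            if (m.exclusivity() < MIN_EXCLUSIVITY || m.longestChain() > MAX_CHAIN_LENGTH) {
                System.out.println("ALERT: architectural drift in " + m.service());
            }
        }
    }

    public static void main(String[] args) {
        evaluate(List.of(new Measurement("billing", 0.45, 7))); // triggers the alert
    }
}
```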

#4: Automate the Process

Automating data collection is only the first step in delivering comprehensive observability. Automation must turn that data into valuable insights that enable architects to minimize architectural erosion. The landscape is too complex and changes too rapidly for manual processing.

Continuous Modernization and Architectural Drift

Architects must be proactive in a continuous modernization environment. They must shift left to be more engaged in the initial design, whether refactoring, rearchitecting, or starting new. Their job persists through an application’s lifecycle because they have the tools needed to observe and correct architectural drift.

vFunction’s Continuous Modernization Manager provides architects with the tools needed to overcome observability challenges. Its automated modernization solution provides a holistic approach that delivers insights based on consistent data. The manager allows architects to:

  • Shift left into the development cycle
  • Monitor, detect, and identify architectural drift
  • Set baselines and thresholds
  • Send alerts when critical thresholds are crossed

vFunction enables engineers to remain proactive through an application’s lifecycle. It helps maintain the architectural integrity of the software as it is continuously modernized. To see how we can help with your application modernization needs, request a demo.

Q&A Series: Building a Business Case for Application Modernization

How to get buy-in and budget for successful application modernization

Bob Quillin, chief ecosystem officer at vFunction, is an industry expert when it comes to application modernization. He often finds that the biggest hurdle to application modernization is developing a compelling business case to take on such a complicated task that can be costly and frequently fraught with risk. Business leaders need justification for budget allocation, yet most architects lack data to prove it’s essential or determine the resources needed to pull it off successfully.

A business case must be backed with data — data that is easy to understand and that shows the bigger picture. Business leaders don’t want to be told something has to be done; they prefer to be shown why it needs to be done and what is likely to happen if it isn’t. This is precisely what Bob and his team at vFunction do with their Assessment Hub and Assessment Hub Express tools. These solutions were built specifically for architects who want a simplified way to build a data-driven application modernization plan and need to create a strong business case to do so.

In this interview with Bob, we discuss the key inhibitors to successful modernization projects and how to develop a rock-solid business case for application modernization. He will also discuss how the vFunction Assessment Hub works and the benefits it brings for gaining rapid visibility into the health of the entire application estate.

Q: Tell me why building a business case for application modernization is so difficult.

Bob: One of the key inhibitors to modernization projects being successful is that it’s hard to build a business case to get them approved and off the ground. Traditionally, architects haven’t had a clear understanding of what exactly needs to be done, how long it will take, or how complex it will be, all critical components of a business case. But now, we can provide the science and data to build the case.

Q: What happens without a business case?

Bob: Oftentimes, nothing. Modernization projects are either delayed, never start, or end in failure. If you aren’t looking inside and analyzing the application architecture, you can’t accurately predict the value of modernization. Without the business case, you can’t have a successful modernization project and vice versa. 

In our 2022 study with Wakefield Research of 250 technology professionals, we found that “Failure to Accurately Set Expectations” was the number one reason given by respondents who started modernization projects they didn’t complete. Areas of particular concern included unrealistic expectations relating to budget and schedule requirements and anticipated project results such as improvements in engineering velocity and application innovation.

With vFunction’s suite of application modernization solutions, architects and senior engineers can understand the technical debt in each app, pull it out and fix the problem, modernize the application, and continually monitor and fix new issues to prevent technical debt from accumulating again.

Q: How do architects know they have a technical debt problem?

Bob: From a qualitative level, application leaders have a strong sense that they are carrying a heavy load of technical debt by the symptoms they go through every time they add a new feature. How long does it take? If it’s taking your team more and more time each sprint, you know you have an issue. It can also become harder to add new features because it’s more difficult to figure out where to add the new feature and integrate it. It will also become much more difficult and time-consuming to test. One small change in a monolith requires you to test the entire application because you don’t know the downstream implications. 

With monoliths, there is a high degree of dependencies, so release cycles expand, engineering velocity decreases, and eventually, your ability to compete and add new features slows. You’ll often see a backlog of feature requests you have in your project management and tracking systems that you can’t keep up with. It significantly hampers the Dev team’s capacity and production. 

Q: How can all of this lead to increased costs?

Bob: If you have a spike in demand (requiring more CPU and memory resources) or it’s an important application, it becomes difficult to scale a monolithic application without buying bigger machines or cloud instance types or shapes. On the flip side, cloud-native architectures are more horizontally scalable with greater elasticity. The two complaints I hear most from architects are that they can’t scale and that costs go up. 

There are costs to run the app, even after a lift and shift. If you break down that monolithic application into microservices, you can be more efficient in how you apply the wider variety of cloud instances to that particular need. Release velocity increases, testing speed cycles increase, and there is elasticity and scalability, all at a lower cost. These are all reasons to break down monolithic apps into microservices.

Q: If there is such a need and so much to gain, why is it so hard to get an application modernization project off the ground?

Bob: We surveyed 250 application teams and looked at the top reasons for failure. The number one reason was a failure to set expectations for leaders and architects accurately. At a minimum, they need to understand what application modernization will solve in terms of technical debt, how long it will take, and what it will cost. They need an ROI — what they will get in terms of reducing technical debt and increasing innovation.

Q: Why is this information so elusive?

Bob: Currently, the only information available is mostly qualitative. In other words, they just use their experience and best guesses, bring in consultants or a system integrator, or outsource the whole thing. It isn’t based on any science, automation, or best practices. When they don’t have data to measure architectural technical debt, they can’t assess the complexity of the app or the risks of changing it. They need observability to understand dependencies, dead code, and what’s common and not common code — all the things that make up the architecture. Without it, it’s nearly impossible to make a plan for rearchitecting an architecture you don’t understand. A classic business mantra is that if you can’t measure it, you can’t improve it. When you can measure it, you can decide how to improve it, what to fix, how long it will take, how complex it will be, and what the cost will be. 

Q: vFunction directly addresses these challenges with the vFunction Assessment Hub. Is there anything else out there like it?

Bob: There are other tools that analyze source code to report back how it is written, any number of code “smells” or poor software engineering practices, and cyclomatic complexity that tracks the number of linearly-independent paths through the application. Source code analysis is different from architectural analysis, which looks at how an app is built and constructed versus how it is written. It’s easier to track little source code errors along the way versus fixing the architecture itself, but you never truly modernize the application if you don’t address the underlying root cause of technical debt. 

I like to think of it like a house. When you’re updating a kitchen or adding a bathroom to a massive house, you have to figure out the architectural components before you can tie new plumbing into the old. All of the plumbing is interdependent, so if you make a mistake with one piece of plumbing, it can impact the entire plumbing system. The monolithic application is the house. Think how much easier it is if you had the opportunity to break up a mansion into individual casitas, or microservices in this analogy. Adding on or fixing plumbing issues is now much more manageable, with fewer dependencies to worry about.

Security is another issue. If you add a piece of open-source code that has a known vulnerability, you can scan that library or code prior to adopting the component. People call source code tracking “checking for code smells,” which means looking for errors or anti-patterns that developers have added along the way that can be detected and fixed. Security analysis tools pick up security issues; static analysis tools pick up code smells. At vFunction, we actually use these in our own software development process, but what development teams are typically missing are measurement and tracking tools for architectural technical debt.

Q: What is an example of an architectural issue?

Bob: Dead code is a good example of an architectural issue. You can’t analyze it just by looking at the source code. We define dead code as code that is reachable but no longer used. We have found over the years that there are large swaths of obsolete or “zombie” code hidden in most monoliths. Something could call it, but nothing does. Maybe the service is now obsolete. It’s just sitting out there, not being used. 

Architectures drift over time: new features get added or replaced, and older features that customers no longer use stay in the codebase. You’re carrying that technical debt forward. No one wants to touch it, because they weren’t there when it was written and don’t know what to do with it, or they fear negative downstream effects if they do.
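As a minimal, hypothetical illustration of “reachable but no longer used”: the method below is public and compiles cleanly, so static analysis keeps it, yet no production path ever calls it. Only runtime observation reveals that it is a zombie. The names are invented.

```java
public class QuoteService {
    // Actively used by the checkout flow.
    public double quote(double basePrice) {
        return basePrice * 1.08;
    }

    // "Zombie" code: fully reachable, so static analysis treats it as live,
    // but the fax-based workflow it served was retired years ago and nothing
    // invokes it at runtime anymore.
    public String renderFaxQuote(double basePrice) {
        return String.format("FAX QUOTE: $%.2f", quote(basePrice));
    }
}
```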

Q: How is innovation impacted by technical debt?

Bob: Modernization requires funding, people resources, and time, diverting resources from other priorities, so you will have to build a business case and get approval. The question is, do you want to keep doing what you’re doing and add more and more features and ignore technical debt, or finally reduce that debt and start investing the savings in innovating for the future? 

Over time, there is a tipping point where all that technical debt weighs down the application and the organization to the point of breaking. The calculus here is that every dollar spent on technical debt is a dollar you aren’t spending on innovation. If you want to innovate more, you have to reduce your technical debt to get the ROI from modernization. 

Q: How does vFunction help increase innovation?

Bob: vFunction will measure and help you manage architectural technical debt, then highlight the upside if you reduce it — how much ROI you’ll have in terms of innovation. This translates directly to dollars. In fact, this is one of the first factors we look at: how much architectural debt are you carrying, and how is it impacting your ability to innovate? Instead of theories and “gut feels,” we can give you numbers — here’s your ROI and TCO. Now, you have a business case that clearly illustrates to decision-makers that “if we want to increase business velocity, customer satisfaction, and innovation, this is how we have to apply our resources to bring down technical debt.”

Q: Tell me more about vFunction Assessment Hub and how it gathers and presents the data.

Bob: Our Assessment Hub analyzes technical debt based on two factors: complexity and risk. Then, those are synthesized into a technical debt score. 

Complexity is based on the degree of class entanglements within your application. It measures the density of the dependencies and how complex the application will be to modernize. 

Risk is based on the length of the dependency chains in the application. We measure the dependency chains and how they interrelate downstream, so you know what the consequences are down the line if you make one change here. This is the bane of the monolith — if you make one change, you have to test the whole thing. With microservices, you have a high degree of exclusivity over the resources you use, and they are constrained within the boundaries of the microservice. The risk of making a change is much lower. 
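To make the two factors concrete, here is a toy Java computation that treats complexity as dependency density and risk as the longest dependency chain in a class graph. This is an illustrative stand-in under invented names, not vFunction’s actual scoring model.

```java
import java.util.List;
import java.util.Map;

public class NaiveDebtScore {
    // className -> classes it depends on (hypothetical dependency graph)
    static double complexity(Map<String, List<String>> deps) {
        int n = deps.size();
        long edges = deps.values().stream().mapToLong(List::size).sum();
        // Density of dependencies relative to the maximum possible n*(n-1).
        return n <= 1 ? 0 : (double) edges / ((long) n * (n - 1));
    }

    static int longestChain(Map<String, List<String>> deps, String from) {
        int best = 0;
        for (String next : deps.getOrDefault(from, List.of())) {
            best = Math.max(best, 1 + longestChain(deps, next)); // assumes no cycles
        }
        return best;
    }

    public static void main(String[] args) {
        var deps = Map.of(
                "Controller", List.of("Service"),
                "Service", List.of("Dao", "Formatter"),
                "Dao", List.of("Connection"),
                "Formatter", List.<String>of(),
                "Connection", List.<String>of());
        System.out.printf("complexity=%.2f risk(chain from Controller)=%d%n",
                complexity(deps), longestChain(deps, "Controller"));
    }
}
```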

Q: How does that technical debt score inform decisions?

Bob: We set the technical debt score per application, and you can compare it with other applications. We also show how much effort it will take to fix it in terms of time and people. Architects can use that as a way to say, “Here is our technical debt and what it’s costing us. If we reduce the tech debt, here’s the innovation that occurs.” 

The Assessment Hub scores the top 10 technical debt classes and presents a prioritized list of where to start. For example, if you fix only these 10 things, we can say what effect that will have — the ROI. This kind of insight helps people understand not only a top-level debt score, but the components of complexity and risk, and then where to start. We also analyze the architecture to identify aging platforms and frameworks you will want to update. 

Q: What comes next once you understand the scope and magnitude of the modernization project?

Bob: Ideally, you would then jump into the vFunction Modernization Hub to do it. We’ve given you the path, now go at it with an AI-enabled approach.

The vFunction Assessment Hub Express is designed to be fast. You download and run it yourself from our website. We use this as part of our own analysis to help customers get started on modernization. It gives them a snapshot of what it will take. They can then say, “This will be a complex project or wow, this isn’t that hard, and we can do 100-200 classes ourselves.” Sometimes they don’t need full-blown modernization, or they can just lift and shift because they aren’t carrying much debt anyway. You have to make sure there are clear business reasons to modernize. 

Q: So, modernization isn’t always necessary? How do you know?

Bob: For an application to warrant modernizing, it needs to be an application that is actively used and critical to the business. If there’s a large backlog of features to add or requests to fix it, you know there’s a strong demand to extend or improve the application that isn’t being met. But if there’s no business reason to extend, there’s no business IP or competitive value, or if it can be easily replaced by a modern SaaS alternative, refactoring or rearchitecting may not be the best path.

Modernization is complicated, so you have to make sure there is a viable business reason to modernize. Only the business can understand and prioritize if it’s something they want and need to do.

Q: We assume modernization is to cloud-enable a legacy application. Is this true?

Bob: Partly. We are looking at more than improving the architecture. One of the greatest motivators, besides velocity, scalability, and elasticity, is reducing costs and increasing efficiency. When people move to the cloud, they’re also looking to lower infrastructure spend. Cloud services can be less expensive if you architect your application to use them efficiently. 

But it’s also about reducing licensing costs. Legacy licensing for databases and Java itself is expensive. If you move an application, like an enterprise monolith, to the cloud, you’re still carrying significant licensing costs. Most customers want to reduce licensing costs across the board. So, the cost of running an expensive monolithic application and the related licensing costs are also common motivators to modernize. 

Q: Have vFunction users reduced costs this way?

Bob: Yes. If you look at our Trend Micro study, they took a monolith they lifted and shifted, modernized to microservices, and reduced their cloud instance spend by 50%. 

Legacy applications that are lifted and shifted to the cloud require some of the most expensive services in the cloud. If you’ve just taken an older app to the cloud, it’s running with high CPU and memory requirements, plus the most expensive data layer services as well. A lift and shift application has not been optimized for the cloud, and on top of high infrastructure costs, a lift and shift doesn’t reduce licensing costs. Unless it’s cloud-native, you can’t take advantage of the efficiency of the cloud. Vertical scaling is very expensive; you get more horizontal scalability and elasticity with microservices, and it is much more cost-effective.

Q: Last question. You mentioned the importance of presenting the data the Assessment Hub gives in a way that’s easy to understand. Can you explain how the Hub does that?

Bob: The Assessment Hub is graphical. You see a visualization of complexity, risk, debt (with a score), components of tech debt, and the number of aging frameworks. Then, you can analyze TCO, the benefits of fixing the identified debt, and the resulting increase in innovation. 

We present this in different ways on a dashboard. For example, there is a pie chart view of innovation versus technical debt. It graphically represents how much you’re spending on innovation versus technical debt, with percentages for added detail. You can ask, “What are the benefits if I fix this technical debt, and how much will my TCO improve?” You can also download this information as a shareable PDF.

If you’re not ready to modernize, you can let Assessment Hub run over time to monitor trends. Our latest feature is a multiple-application dashboard. It provides compelling observability across multiple apps at the same time to visualize technical debt for a large application estate so you can compare and prioritize. 

You can scope the project to know what you’re getting into and if it’s worth it. If an architect doesn’t have the data, they can’t have a viable, believable business plan. The goal is to get people thinking about this as early as possible.

Bob Quillin not only serves as Chief Ecosystem Officer at vFunction but also works closely with customers, helping enterprises accelerate their journey to the cloud faster, smarter, and at scale. His insights have helped dozens of companies successfully modernize their application architecture with a proven strategy and best practices. Learn more at vFunction.com. 


Q&A Series: The 3 Layers of an Application: Which Layer Should I Modernize First?

How to avoid mistakes when modernizing applications

As Chief Ecosystem Officer at vFunction, Bob Quillin is considered an expert in the topic of application modernization, specifically, modernizing monolithic Java applications into microservices and building a business case to do so. In his role at vFunction, inevitably, he is asked the question, “Where do I start?”

Modernizing can be a massive undertaking that consumes resources and takes years, if it’s ever done at all. Unfortunately, because of its scale, many organizations postpone the effort, only deciding to tackle it when there is a catastrophic system failure. Those who do dive into the deep waters of modernization frequently approach it from the wrong perspective and without the proper tools.

Where to start with modernizing applications boils down to which part of the application needs attention first. There are three layers to an application: The base layer is the database layer, the middle layer is the business logic layer, and the top layer is the UI layer. 

In this interview with Bob, we discuss the challenges facing software architects and how approaching modernization by tackling the wrong layers first inevitably leads to failure, either in the short term or the long term.

Q: What do you see as the most common challenge enterprises face when deciding to modernize?

Bob: Most organizations recognize they have legacy monolithic applications that they need to modernize, but it’s not as easy as simply lifting the application and shifting it to the cloud. Applications are complicated, and their components are interconnected. Architects don’t know where to start. You have to be able to observe the application itself, how the monolithic application is constructed, and what is the best way to modernize it. Unfortunately, there isn’t a blueprint with clear steps, so the architect is going in blind. They’re looking for help in any form – clear best practices, tooling, and advice. 

Q: With a 3-tier application, you’d think there are 3 ways to approach modernization, but you say this is where application teams often go wrong.

Bob: Many technology leaders want to do the easiest thing first, which is to modernize the user interface because it has the most visual impact on their boss or customers. If not the UI, they frequently go for the database layer, perhaps to reduce licensing costs or storage requirements. But the business logic layer is where business services reside and where the most competitive advantage and intellectual property are embedded. It isn’t the easiest layer to begin with, but by starting there, you make the rest of your modernization efforts much easier and more lasting.

Q: What’s the problem starting with the UI layer?

Bob: When you start with the UI, you actually haven’t addressed modernization at all. Modernization is designed to help you increase your engineering velocity, reduce costs, and optimize the application for the cloud. A new UI can have short-term visual benefits but does little to target the underlying problem – and when you do refactor the application, you’ll likely have to rewrite the UI again! Our recommendation is to start with the business logic layer — this is where you’ll find the services with specific business value to be extracted. This allows you to directly solve the issue of architectural technical debt that is dragging your business down. 

Q: What’s the value of extracting these services from the monolith?

Bob: In the past, everything was thrown together in one large monolithic “ball of mud.” The modernization goal is to break that ball of mud apart into smaller, more manageable microservices in the business logic layer so that you can achieve the benefits of the cloud and then focus on micro front-ends and data stores associated with each service. By breaking down the monolith into microservices, you can modernize the pieces you need to, and at that point, upgrading the UI and database becomes much easier.

Q: Tell me more about the database layer and the pitfalls of starting there.

Bob: The database layer should only be decomposed once, as it often stores the crown jewels of the organization and should be handled carefully. It is also a very expensive part of the monolith, mostly because of the licensing, so it often seems like a good place to start to cut costs. But decomposing the database is virtually impossible without understanding how the business logic uses it. What are the business logic domains that use the database? Each microservice should have its own data store, so you need the microservice architecture designed first. You can’t put the cart before the horse. 

Data structures are sensitive. You’re storing a lot of business information in the database. It’s the lifeblood of the business. You only want to change that once, so change it after decomposing your business logic into services that access independent parts of the database. If you don’t do the business logic layer first, you’ll just have to decompose the database again later. 

Q: Explain how breaking down monoliths in the business logic layer into microservices works with the database layer.

Bob: Every microservice should have its own database and set of tables or data services, so if you change one microservice, you don’t have to test or impact another. If you decompose the business logic with the database in mind, you can create five different microservices that have five different data stores, for example. This sequencing makes more sense and prevents having to cycle on the database more than once. 

Also, you clearly want to organize your access to the database according to the business logic needs, not the other way around. One thing we find when people lift and shift to the cloud is that their data store typically uses the most expensive services available from cloud providers. The data layer is very expensive, especially if you don’t break down the business logic first. If you start by decomposing your business logic, you get more efficient and optimized data services from the get-go, services that save you money and are more cloud-native, fitting into a model that delivers the cloud benefits you’re looking for. Go to the business logic first, and it unlocks the opportunities. 
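As a minimal sketch of the database-per-service pattern Bob describes, each service below owns its own data store configuration, so a schema change in billing never touches shipping. The URLs and names are placeholders, not a real deployment.

```java
// Hypothetical per-service data-store configuration: each microservice owns
// its own database, so schema changes are isolated to one service.
public final class DataStores {
    private DataStores() {}

    // Billing service: owns the invoices schema exclusively.
    public static final String BILLING_DB_URL  = "jdbc:postgresql://billing-db:5432/invoices";

    // Shipping service: separate database; it never reads billing tables directly.
    public static final String SHIPPING_DB_URL = "jdbc:postgresql://shipping-db:5432/shipments";

    // Cross-service data flows through APIs or events, not shared tables.
}
```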

Q: What’s the problem with starting modernization with whatever layer feels the most logical?

Bob: Modernization is littered with shortcuts and ways to avoid dealing with the hardest part, which is refactoring, breaking up and decomposing business logic. UI projects put a shiny front on top of an older app. If that’s a need for the business, that’s fine, but in the end, you still have a monolith with the same issues. It just now looks a little better. 

A similar approach is taking the whole application and lifting and shifting it to the cloud. Sure, you’ve reduced data center costs by moving it to the cloud, but you’re delaying the inevitable. You just moved from one data center (your own) to a cloud data center (like AWS). It’s still a monolith with issues that only get bigger and cause more damage later. 

Q: How does vFunction help with this?

Bob: Until vFunction, architects didn’t have the right tools. They couldn’t see the problem so they couldn’t fix it. vFunction enables organizations to do the hard part first, starting with getting visibility and observability into the architecture to see how it’s operating and where the architectural technical debt is, then measuring it regularly. Software architects need that visibility. If we can make it easier, faster, and data-driven, it’s a much more efficient path so that you don’t have to do it again and again. 

Q: How do you focus on the business logic with vFunction? 

Bob: If you’re going to build microservices, you need to understand what key business services are inside a monolith; you need a way to begin to pull those out and clearly identify them, establish their boundaries, and set up coherent APIs. That’s really what vFunction does. It looks for clusters of activities that represent business domains and essential services. You can begin to detangle and unpack these services, seeing the services that are providing key value streams for the business that are worth modernizing. 

You can pull each out as a separate microservice to then run it more efficiently in the cloud, scale it, and pick the right cloud instances that conform to it. You can use all of the elasticity available in containers, Kubernetes, and serverless architectures through the cloud. You can then split up a database to represent just that part of the data domain the microservice needs, decomposing the database based on that microservice. 

Q: Visibility is key here, right?

Bob: Yes. The difficulty is having visibility inside the monolithic application, and since you can’t see inside it or track technical debt, you have no idea what’s going on or how much technical debt is in there. The first step is to have the tools to observe and measure that technical debt and understand the profile, baseline it, and track the architectural patterns and drift over time. 

Q: How does technical debt accumulate, and what can architects do about it?

Bob: You may see an application that was constructed in a way that wasn’t perfect but was viable, and over time it erodes and gathers more and more architectural technical debt. More business layers are added on top, more code is copied, and new architects come and go. Many permutations accumulate, and the monolith becomes untenable in its ability to absorb changing requirements, updates, and maintenance. Monoliths are very brittle. Southwest Airlines and Twitter know this all too well.

But this is where vFunction comes in to help you understand where that architectural technical debt is. You can use our Continuous Modernization Manager and Assessment Hub to provide visibility and tracking, and then our Modernization Hub helps you pull apart and identify the business domains and services.

Q: What infrastructure and platforms support the business logic?

Bob: Application servers run the business logic. Typically, we find Oracle WebLogic, IBM WebSphere, Red Hat JBoss, and many others. Monoliths depend on these legacy platforms because the business logic is managed by the application server. As a result, both the app server and the database sit on older, more expensive licensed technology, written for an architecture or domain of 10-20 years ago.

Q: What are the key benefits of looking at the business logic layer first?

Bob: By starting with the key elements that compose your architecture, including the classes, resources, and dependencies, you start to identify the key sources of architectural technical debt that need to be fixed. Within the new architecture, you want to create high levels of exclusivity, meaning the components and the resources they depend on are exclusive to each microservice. The primary goal is to architect services that are highly independent of each other.

Q: And what does that mean for the developer?

Bob: For the developer, it increases engineering velocity. 

In a monolith, if I want to change one thing, I have to test everything because I don’t know the dependencies. With independent microservices, I can make quick changes, testing cycles go down, and I can ship faster, more regular releases because my test surface is much smaller and my cycles are much faster.

Microservices are smaller and easier to deal with, requiring smaller teams and a narrower focus. You can respond faster to customer feature requests. As a developer, you have much more freedom to make changes and move to a more Agile development environment. You can start using more DevOps approaches, shifting left all of the testing, operational, and security work into that service, because everything is now much more contained and managed.

Q: What does it mean from an operational perspective?

Bob: From an operational perspective, if the application is architected as microservices, you have more scalability when there’s a spike in demand. With microservices and container technology, you can scale horizontally and add more capacity. With a monolith, I have only so much headroom; once I hit memory and CPU limits, I can’t scale any further, and at some point there’s no bigger machine to buy. I may have to start replicating that machine somewhere else. By moving to microservices, I have more headroom to operate and meet customer demand.

So, developers get higher velocity, it’s easier to test features, there’s more independence, and operationally, they get more scalability and resilience in the business. These benefits aren’t available with a monolith. 

Q: This sounds like it requires a cultural shift to get organizations thinking differently about modernization.

Bob: Definitely. From a cultural perspective, you can start to adopt more modern practices and DevOps technologies like CI/CD for continuous integration and continuous delivery. You’re then working in a modern world instead of one frozen 20-30 years in the past.

As you start moving monoliths to microservices, we hear all the time that engineering morale goes up, and retention and recruiting get easier. It’s frustrating for engineers to face a backlog of feature requests they can’t respond to because of a long test cycle. The business gets frustrated, engineers get frustrated, and that leads to burnout. Modernizing puts you in a better position to meet business demands and, honestly, have more fun.

Q: Are all monoliths bad?

Bob: No, not all monoliths are bad. When you decompose a monolith into many microservices and teams, you should get a more efficient, scalable, higher-velocity organization, but you also take on more complexity. You’ve traded one set of complexities for another while gaining extensive benefits from the cloud. With the monolith, you couldn’t make changes easily; with microservices, changes are much easier because you’re dealing with fewer interdependencies. And while the application may be more efficient, it may not be as predictable as before, given its new native elasticity.

As with any new technology, this evolution requires new skill sets and training, and making sure your organization has the relevant cloud experience, with container technologies and DevOps methodologies, for instance. Most of our customers already have applications in the cloud and have developed a modern skill set to support that. But with every new architecture comes a new set of challenges.

Modernization needs to be done for the right reasons and requires a technical and cultural commitment as a company to be ready for that. If you haven’t made those changes or aren’t ready to make those changes, then it’s probably too soon to go through a modernization exercise. 

Q: What is the difference between an architect trying to modernize on their own versus using a toolset like vFunction offers? 

Bob: Right now, architects are running blind when it comes to understanding the current state of their monolithic architectures. There are deep levels of dependencies with long dependency chains, making it challenging to understand how one change affects another and thus how to untangle these issues. 

Most tools today look at code quality through static analysis, not architectural technical debt. This is why we say vFunction can help architects shift left, back into the software development lifecycle. We provide observability into their architecture, which is critical because architectural complexity is the biggest predictor of how difficult modernizing an application will be and how long it will take. If you can’t understand and measure the architectural complexity of an application, you won’t be able to modernize it.

Q: Is vFunction the first of its kind in terms of the toolset it provides architects?

Bob: Yes. We have built a set of visibility, observability, and modernization tools based on science, data, and measurement to give architects an understanding of what’s truly happening inside their applications. 

We also provide guidance and automation to identify the opportunities to decompose the monolith into microservices, with clear boundaries between them. We offer consistent API calls and a “what if” mode — an interactive, safe sandbox environment where architects can make changes, roll back those changes, and share with other architects for greater collaboration, even across globally dispersed teams.

vFunction provides the tooling, measurement, and environment so architects and developers have a proactive model that prevents future monoliths from forming. We create an iterative best practice and organizational strategy so you can detect, fix, and prevent technical debt in the future. Architects can finally understand architectural technical debt, prevent architectural drift, and efficiently move their monoliths to microservices.

Bob Quillin serves as Chief Ecosystem Officer at vFunction and works closely with customers, helping enterprises move to the cloud faster, smarter, and at scale. His insights have helped dozens of companies successfully modernize their application architecture with a proven strategy and best practices. Learn more at vFunction.com.

Related Posts:

Technical Debt Risk: Review SWA, the FAA and Twitter Outages

How all organizations can learn to spot the warning signs

Until recently, “technical debt” was a term reserved mostly for those in IT: architects, developers, app owners, and IT leaders. Thanks to a few high-profile outages at Southwest Airlines, the FAA, and Twitter, technical debt has made it to mainstream media outlets, which are reporting on how unchecked technical debt contributed to failures that affected millions of people, cost billions, and did immeasurable damage to brand reputations.

While these organizations likely wish their hardships weren’t blasted to the public, perhaps the spotlight will serve as a warning to the thousands of other organizations that could share similar fates if they don’t act soon to address their technical debt. As more organizations shift applications to the cloud to enhance their capabilities, the problem will only increase. 

In this Q&A with Bob Quillin, the chief ecosystem officer at vFunction, we take a deep dive into how technical debt happens, the risks of ignoring it, and how it can be efficiently managed before it leads to major issues.

Q: Can you give me a little background on each of these system failures? Let’s start with Southwest Airlines.

Bob: Southwest Airlines has actually had two failures recently. The most recent was a firewall failure. Even the company’s vice president said they never know when a failure is going to happen, and fixes have been slow. This is the definition of technical debt risk.

The first outage impacted tens of thousands of travelers during the peak holiday season. At first glance, you might think it was just an unfortunate coincidence, but technical debt typically is most dangerous when there is stress on the infrastructure, so the timing of this crash wasn’t random.

Over the last few years, Southwest has been called out for its outdated systems that need upgrading. How they interact with crew members and guests is very manual and phone-based. Even the pilots and crew have been saying the systems are antiquated. Most major airlines have fully modernized their business processes, whereas Southwest has not. They knew they had technical debt, but they weren’t addressing it. This scenario is typical of most technical debt issues we see in the marketplace. You keep kicking the can down the road and crossing your fingers. 

When you start seeing technical debt cited in both the financial and mainstream press as the reason for high-profile business outages, it raises the visibility of the business impact; IT and engineering teams are no longer the only ones talking about it. When it causes a billion-dollar outage that affects millions of people, it becomes obvious even to business people outside of IT. It can affect application availability, firewalls, data security, and more. When one card falls, others fall too, and you never know when it’s going to happen or how many systems it will impact.

Q: What about the FAA?

Bob: The FAA failure was an issue around a damaged database file and is a good example of an aging app infrastructure. With an older monolithic architecture like the FAA has, a single issue in one location has a ripple effect all the way down, cascading to a greater issue. Had they broken down their monoliths into microservices, they would have had a more distributed architecture with greater survivability, so one outage wouldn’t cause others to shut down the system. 

The FAA knew they had an outdated application that needed to be modernized, but it was risky to change. Everyone keeps adding features and patching it here and there, so one problem causes many others.

Q: Is there a way to reduce that risk?

Bob: You have to directly measure and manage technical debt to understand the risk — what are the dependency chains and the downstream effects? To stay in front of that, you need a technical debt analysis strategy to track architectural drift and monitor how components depend on and relate to one another. Then you can begin isolating where problems occur, and the blast area gets smaller. A best practice is being able to isolate a problem when it occurs to minimize the cascading effect. Southwest Airlines couldn’t handle the scale, while the FAA had one small problem that cascaded into a bigger issue. It’s why so many organizations are moving to a cloud-native architecture.
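
To illustrate the isolation idea, here is a deliberately minimal, hand-rolled Java circuit breaker. It is a sketch of the pattern, not a production implementation (real systems would typically reach for an established resilience library): after a run of failures, calls to the troubled dependency are short-circuited to a fallback so the failure doesn’t cascade.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Minimal, hand-rolled circuit breaker: after `failureThreshold` consecutive
// failures, calls are short-circuited to a fallback for `openInterval`,
// protecting both the caller and the failing dependency.
public class SimpleCircuitBreaker {
    private final int failureThreshold;
    private final Duration openInterval;
    private int consecutiveFailures = 0;
    private Instant openedAt = null;

    public SimpleCircuitBreaker(int failureThreshold, Duration openInterval) {
        this.failureThreshold = failureThreshold;
        this.openInterval = openInterval;
    }

    public synchronized <T> T call(Supplier<T> remoteCall, Supplier<T> fallback) {
        // While the breaker is open, fail fast instead of piling load
        // onto a component that is already struggling.
        if (openedAt != null && Instant.now().isBefore(openedAt.plus(openInterval))) {
            return fallback.get();
        }
        try {
            T result = remoteCall.get();
            consecutiveFailures = 0; // success closes the circuit
            openedAt = null;
            return result;
        } catch (RuntimeException e) {
            if (++consecutiveFailures >= failureThreshold) {
                openedAt = Instant.now(); // trip the breaker
            }
            return fallback.get();
        }
    }
}
```

A hypothetical call site such as `breaker.call(() -> scheduleService.fetch(crewId), () -> cachedSchedule)` keeps one slow or failing dependency from dragging everything around it down.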

Q: Let’s talk about Twitter. It had less of a catastrophic impact, but it was, at a minimum, an inconvenience for users.

Bob: The Twitter outage was attributed to a coding mistake. There was a lot of public discussion among the engineering teams, who shared that the application had grown dramatically over the years and become slow and hard to change. They chose feature velocity over fixing the foundation, spending a lot of time adding capabilities without paying down the technical debt. We see this mistake across many companies.

Twitter is now trying to make more structural changes, replacing old features with new ones, and realizing the code can’t change as quickly as the new management wants. They are trying to ramp up engineering velocity, but the applications weren’t built for that. 

With a cloud-native architecture, they could add those features more quickly with more agility, but the technical debt they’ve accumulated over the years makes it harder to make changes. They’ve taken on too much technical debt to adopt new features quickly, and the application has just become too brittle. Unfortunately, you can’t take a monolith and turn it into a cloud app magically.

Q: These are examples of technical debt risk at large organizations. Does technical debt apply to smaller companies as well?

Bob: Most definitely. Look at the types of organizations we’ve just discussed: a 40-year-old major airline, a government entity that’s slower to modernize but runs mission-critical applications, and a newer cloud-unicorn company you’d assume is technically advanced. All three accumulated technical debt for different reasons, and in each case it caused high-profile failures that turned a technical problem into a business problem.

What typically happens is that technical debt is only discussed inside engineering and only surfaces when something catastrophic happens. These three examples are very visible, but the same thing occurs on a smaller scale at probably every company.

Q: How can a company know they have a technical debt problem?

Bob: Technical debt causes many familiar symptoms: a feature that didn’t ship on time, a key customer lost, a deal lost to a competitor. All of these often trace back to an inability to respond quickly because engineering velocity is dragged down by technical debt. You can see it occurring at a micro level that’s less visible than a total system crash. You lose a deal, a customer, or market share one drip at a time. All of those losses can happen because technical debt slows your ability to innovate and keep up with opportunities.

On the flip side, look at what happened to Zoom. Zoom took the pandemic as an opportunity and was able to race ahead of competitors. No one anticipated everyone going virtual. They had the agility to make those changes quickly because they were cloud-native. Other businesses were slower to respond.

What happens when the pandemic effect is over? Can you respond to the next opportunity? Seizing those windows depends on engineering velocity driving business agility. There is nothing worse for a CTO, senior engineer, or app owner than having to explain to their CEO or CFO that the company can’t innovate and win because it lacks engineering agility.

Q: So how do organizations typically approach the lack of engineering velocity or business agility?

Bob: Usually, they debate whether to hire more people, hire less expensive resources, or outsource. They ignore the technical debt and bolt on more and more features to keep trying to move faster. The problem with monoliths is that there’s only so fast you can move. More people doesn’t always mean more speed; you can’t hire enough people or buy big enough machines to keep up.

The only way to increase velocity and innovate faster is to rearchitect the product. With a monolithic architecture, you have fixed hardware and software infrastructure costs that are prohibitive. We have one customer that couldn’t buy a bigger machine because it didn’t exist; their only option was to break the monolith into microservices to scale. They could then add resources where it helped the business, run more efficiently, and apply their dollars to the infrastructure and licensing they actually needed.

Q: Are budgets a significant component here?

Bob: The problem is that companies aren’t addressing technical debt because they don’t want to dedicate the resources for it – time, people, and money. They either need to add more resources or dedicate the time to fix it. Unfortunately, your resource budget isn’t likely to go up and will probably be reduced. So what do you do? 

You can let things slide and keep bolting more onto the application to make it work, at the expense of fixing the debt. That works fine until the rules change. For example, Elon comes in and says we’re going to get rid of this and add that, and the engineers have to say they can’t make the changes required to shift the business model that way.

Q: So, there is a cost to carrying technical debt?

Bob: Absolutely. That’s where business planning comes in. You have to look at what technical debt is costing you and build a business case showing there is ROI in modernizing. How do you break out of this deadly cycle, where technical debt is going up and innovation is going down? It requires a frank conversation. Before vFunction, there was no way to build the business case that makes that conversation possible.

Q: How does vFunction help build that business case for reducing technical debt risk?

Bob: Our goal is to use science and data to analyze your app, determine the most effective way to modernize it, and help you put together a business case. We tell you where to modernize, the reasons and risks, and the upside — what percentage of your IT budget goes to technical debt versus innovation. We can provide those insights in just six months.

Businesses of all sizes need the data, the analysis, and the ability to understand what architectural changes they must make to gain that velocity and avoid the outages others are experiencing. More importantly, you get the business velocity you need for a win-win: minimizing catastrophic events while creating greater velocity.

Q: In the past, it was hard to quantify innovation, but vFunction can do that?

Bob: Yes. Our software puts numbers on what innovation means. Innovation is a goal, but what does your feature backlog look like in terms of features and new capabilities you want to add to your application? How much is that growing over time, and are those features working? 

If you can increase your feature velocity, that will give you a dollar amount on the other side. Will it add $1M to your bottom line? You can build a business case on feature velocity. You can also understand how much an outage would cost, or if you already have one, how fast you can make bug fixes. There is a cost to that. 

There is also a cost to run an app — high-cost hardware, software licensing, and database licensing. All have a compelling, hard dollar cost. You need a business case with a clear view of what you want to do, where you want to do it, and how long it will take, and make sure you can have a clear discussion about business value. 

Most modernization projects that succeed have this full visibility into the advantages. That said, you have business-critical apps that need to keep running, and you can’t just flip a switch. There are a variety of best practices, like the Strangler Fig Pattern, to keep the monolith alive while you modernize. It’s a risk-averse, programmatic, sequential way to move from an old pattern to a new one without a drop in service.
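
As a rough illustration of the routing idea behind the Strangler Fig Pattern, here is a hypothetical Java sketch: traffic for already-extracted capabilities goes to the new microservice, and everything else still reaches the legacy monolith. In practice this routing usually lives in an API gateway or reverse proxy, and the hostnames here are invented.

```java
import java.net.URI;

// Strangler fig routing sketch: extracted capabilities are served by new
// microservices while the remaining paths still hit the legacy monolith.
public class StranglerFigRouter {
    private static final URI MONOLITH = URI.create("http://legacy-monolith.internal");
    private static final URI ORDER_SERVICE = URI.create("http://orders.internal");

    /** Decide which backend should handle a given request path. */
    public static URI route(String path) {
        // As each capability is extracted, add its routes here; the monolith
        // "withers" one endpoint at a time, with no drop in service.
        if (path.startsWith("/orders")) {
            return ORDER_SERVICE.resolve(path);
        }
        return MONOLITH.resolve(path);
    }

    public static void main(String[] args) {
        System.out.println(route("/orders/42"));  // -> new microservice
        System.out.println(route("/billing/7"));  // -> legacy monolith
    }
}
```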

Q: How long does assessing technical debt risk take?

Bob: vFunction Assessment Hub is relatively quick, typically focusing on a core set of apps you determine are worth modernizing; that can be a handful, or it could be hundreds with business value. Our Assessment Hub is an affordable, efficient, and automated way to build the business case, taking less than an hour for one app or a few weeks for a larger application estate.

Q: Once you understand the extent of your technical debt, then what?

Bob: vFunction Modernization Hub analysis is automated, but it involves active interaction with an architect through our Studio UI to refine and refactor the architecture. A process that might take years to complete without vFunction takes only weeks or months with it, with higher-quality results. With Modernization Hub, you have the data and an understanding of how the architecture and its dependencies improve, or don’t, with each change.

Q: What are the costs and time associated with modernizing with Modernization Hub?

Bob: The cost and time are based on the scale of the app, so the Assessment Hub will tell you how long it will take. Some apps have millions of lines of code and tens of thousands of classes, so it takes more time. Our pricing and estimations are based on complexity and the number of classes within the app. With our service extraction capability, it’s a full, end-to-end cycle. We find a major value in visualizing the recommended service topology and refining the architecture from there. 

Q: What is the role of the architect here?

Bob: The architect stays in control, but we guide them. They can decide whether to split services out or combine them. We facilitate those decisions and provide guidelines and recommendations, but vFunction works best as an expert tool that helps architects do their job more efficiently and clearly, with observability and control on their end.

Q: Is modernization a one-and-done sort of thing?

Bob: It’s not. It’s continuous because there are always changes to the architecture and apps. But vFunction Continuous Modernization helps you baseline your architecture, monitors the metrics you need to track, and detects critical architectural drift. We alert you when something exceeds an expected baseline or threshold — anything that causes a spike in technical debt that needs to be controlled. Then, the architect can go back into the Modernization Hub to fix it. 

Q: Finally, what’s the ultimate lesson we can learn from the Southwest Airlines, FAA, and Twitter failures?

Bob: The fact that technical debt has worked its way into the business press and everyday conversation is not a good thing. It’s a warning to every business, and now that it’s so public, your business leaders will likely start asking how technical debt is being addressed. 

If you’re not tracking your technical debt, you will miss the warning signs. You’ll start to see slowdowns and glitches, business failures, and missed business expectations. Every application owner assumes and hopes these issues won’t snowball into a catastrophic failure down the line, but we are seeing more of them do exactly that.

It’s easy to understand in health terms: if technical debt can truly have a critical effect on your business and you see warning signs, at least measure, monitor, and prepare. You need a physical for your application estate. We are like an EKG, identifying where the problems are and their extent. You don’t want to wait until fixable issues grow into a catastrophe, as they did at Southwest Airlines. Act now, and you can proactively manage technical debt and control the risk so that it won’t stop the heart of your operations.


Related Posts:

Red Hat and vFunction Present at Red Hat Summit 2023

Why do application modernizations fail 79% of the time? Red Hat and vFunction answered that question at Red Hat Summit 2023 and provided three recipes for success. Co-presented by Markus Nagel, Principal Technical Marketing Manager at Red Hat and Bob Quillin, Chief Ecosystem Officer at vFunction, the session is now available on-demand on the Red Hat Summit 2023 Content Hub (registration required).

Check out the full video to see the how-to details behind these recipes, including:

  • Recipe #1: Shift left and use observability, visibility, and tooling to understand, track, and manage architectural technical debt
  • Recipe #2: Use AI-based modernization & decomposition to assess complexity, identify domains & service boundaries, eliminate dead code, build common libraries
  • Recipe #3: Leverage the strangler fig pattern to shift traffic and workload from monolith to new microservices

Specific integration patterns detailed in the session described how to use vFunction Modernization Hub to decompose and extract one or more microservices from a monolith. This involved leveraging the vFunction Red Hat Certified OpenShift Operator and using the OpenAPI definitions generated by vFunction Modernization Hub to expose the new microservices via Red Hat 3scale API Management on OpenShift.

As a foundation, these new services would also benefit from cloud-native runtimes such as Quarkus. Alternatively, Red Hat also supports Spring Boot on OpenShift, among other available options.

To execute the Strangler Fig Pattern, the session described how to use the new Red Hat Service Interconnect (based on the open source Skupper project) to connect the remaining monolith (and possibly other legacy components) with the new microservices on OpenShift.

Details of the session include:

Session Link: https://events.experiences.redhat.com/…

Session Title: Why Application Modernizations Fail (and 3 Proven Recipes for Success)

Abstract:

Why do application modernizations fail? Attempts to modernize monolithic applications into microservices—specifically business-critical Java and .NET apps we depend on every day—can be frustrating and fraught with failure.

In this virtual session, we will:

  • Identify key reasons for failures from independent industry-based surveys.
  • Explore 3 proven recipes for successful modernization with case study examples, demonstrations, and deployments to Red Hat OpenShift.
  • Explore the role of artificial intelligence (AI)-augmented architectural (technical) debt assessment, observability-driven decomposition analysis, and strangler fig pattern rollouts.
  • Architects and product owners will learn how to use the automated analytics of vFunction AI-driven assessment and analysis of monolithic applications to deploy on Red Hat OpenShift, while significantly reducing the effort and risk of the process.

The 5 Don’ts of Legacy Application Migration

Companies today depend on legacy applications for some of their most business-critical processing. In many cases, those apps still do what they were designed to do quite well. But to retain or expand their value in this age of accelerated innovation, they need to be fully integrated into today’s dominant technological environment, the cloud. That’s why legacy application migration has become a high priority for so many organizations. A recent survey reveals that 48% of companies planned to migrate at least half of their apps to the cloud within the past year.

Yet for many organizations, the ROI they’ll reap from their legacy application migration efforts will fall short of expectations. According to PricewaterhouseCoopers (PwC), “53% of companies have yet to reap substantial value from their cloud investments.” And McKinsey estimates that companies will waste approximately $100 billion on their application migration projects between 2021 and 2024.

Why Legacy Application Migration Falls Short

Why does legacy application migration so often fail to provide the expected benefits? In many cases, it’s because companies believe the quickest and easiest way to modernize their legacy apps is to move them to the cloud as-is, with no substantial changes to an app’s architecture or codebase.

But that methodology, commonly called “lift and shift,” has proven to be fundamentally inadequate for fully leveraging the benefits of the cloud. Yet companies often adopt it as the foundation for their app modernization efforts based on some widespread but fallacious beliefs about the advantages of that approach.

In this article, we want to examine some of the most pernicious lift and shift fallacies that frequently lead companies astray in their efforts to modernize their legacy app portfolios. Let’s start with an issue that’s fundamental to the inadequacy of lift and shift as a company’s primary method for moving apps to the cloud: technical debt.

The Role of Technical Debt

The greatest hindrance to a company fully benefiting from the cloud is the failure to modernize its applications. Monolithic applications carry a large amount of architectural technical debt that makes integrating them into the cloud environment a complex, time-consuming, risky, and sometimes nearly impossible undertaking. And that, in turn, can negatively impact a company’s long-term marketplace success. A McKinsey report on technical debt puts it this way:

“Poor management of tech debt hamstrings companies’ ability to compete. The complications created by old and outdated systems can make integrating new products and capabilities prohibitively costly.”

But what, exactly, is technical debt? Here’s a concise yet informative definition:

“Technical debt is the cost incurred when poor design and/or implementation decisions are taken for the sake of moving fast in the short-term instead of a better approach that would take longer but preserve the efficiency, maintainability, and sanity of the codebase.”

By modern design standards, legacy apps are, almost by definition, permeated with “poor design and/or implementation decisions.” For example, such apps are typically structured as monoliths, meaning that the codebase (perhaps millions of lines of code) is a single unit with functional implementations and dependencies interwoven throughout. 

Such code can be a nightmare to maintain or upgrade since even small changes can ripple through the codebase in unexpected ways that have the potential to cause the entire app to fail.

Related: Eliminating Technical Debt: Where to Start?

Not only does technical debt make legacy code opaque (hard to understand), brittle (easy to break), and inflexible (hard to update), but it also acts as a drag on innovation. According to the McKinsey technical debt report, CIOs say they’re having to divert 10% to 20% of the budget initially allocated for new product development to dealing with technical debt. On the other hand, McKinsey also found that by effectively managing technical debt, companies can free their engineers to spend up to 50% more of their time on innovation.

The Fallacies of Lift and Shift

Because it involves little if any change to an app’s architecture or code, lift and shift normally moves apps into the cloud faster and with less engineering effort than other legacy application migration approaches. But the substantial benefits companies expect to reap from that accomplishment rarely materialize because those expectations are usually based on fallacious beliefs about the true benefits of simply migrating legacy apps to the cloud.

Let’s look at some of those fallacies.

Fallacy #1: Lift and Shift = Modernization

Companies often migrate their legacy apps to the cloud as a means, they think, of modernizing them. But in reality, simple as-is migration (which is what lift and shift is all about) has very little to do with true modernization. To see why, let’s look at a definition of application modernization from industry analyst David Weldon:

“Application modernization is the process of taking old applications and the platforms they run on and making them ‘new’ again by replacing or updating each with modern features and capabilities that better align with current business needs.”

Lift and shift migration, which by definition transfers apps to the cloud with as little change as possible, does nothing to update them “with modern features and capabilities.” If the app was an opaque, brittle, inflexible monolith in the data center, it remains exactly that, with all the disadvantages and limitations of the monolithic architecture, when lifted and shifted to the cloud. That’s why migration alone has little chance of substantially improving the agility, scalability, and cost-effectiveness of a company’s legacy apps.

True modernization involves refactoring apps from monoliths to a cloud-native microservices architecture. Only then can legacy apps reap the benefits of complete integration into the cloud ecosystem. In contrast, lift and shift migration only defers the real work of modernization to some future time.

Fallacy #2: Lift and Shift Is Faster

It’s true that lift and shift migration is usually the quickest way to get apps into the cloud. But it’s often not the quickest way of making apps productive in the cloud. That’s because cloud management of apps that were never designed for that environment, and that retain all the technical debt and other issues they had in the data center, can be a complex, time-consuming, and costly process.

The ITPro tech news site provides a good example of the kind of post-migration issues that can negate or even reverse the supposed speed advantage of lift and shift:

“Compatibility is the first issue that companies are liable to run into with lift-and-shift; particularly when dealing with legacy applications, there’s a good chance the original code relies on old, outdated software, or defunct libraries. This could make running that app in the cloud difficult, if not impossible, without modification.”

To make matters worse, the complexity and interconnectedness of monolithic codebases can make anticipating potential compatibility or dependency issues prior to migration extremely difficult.

Fallacy #3: Lift and Shift Is Easier

In the past, architects lacked the tools needed for generating the hard data required for building a business case to justify complex modernization projects. This made lift and shift migration appear to be the easiest path toward modernization.

But today’s advanced AI-based application modernization platforms provide comprehensive analysis tools that enable you to present a compelling, data-driven business case demonstrating that, from both technical and business perspectives, the long-term ROI of true modernization far exceeds that of simple migration.

Fallacy #4: Migration Is Cheaper

Because lift and shift migration avoids the costs associated with upgrading the code or structure of monolithic legacy apps, it seems to be the least expensive alternative. In reality, monoliths are the most expensive architecture to run in the cloud because they can’t take advantage of the elasticity and adaptability of that environment.

Related: Migrating Monolithic Applications to Microservices Architecture

Migrated monolithic apps still require the same CPU, memory, and storage resources they did in the data center, but the costs of providing those resources in the cloud may be even greater than they were on-prem. IBM puts it this way:

An application that’s only partially optimized for the cloud environment may never realize the potential savings of (the) cloud and may actually cost more to run on the cloud in the long run.

IBM also notes that because existing licenses for software running on-site may not be valid for the cloud, “licensing costs and restrictions may make lift and shift migration prohibitively expensive or even legally impossible.”

Fallacy #5: Migration Reduces Your Technical Debt

As we’ve seen, minimizing technical debt is critical for effectively modernizing legacy apps. But when apps are simply migrated to the cloud, they take all their technical debt with them and often pick up more when they arrive. For example, some migrated apps may develop debilitating cloud latency issues that weren’t a factor when the app was running on-site.

So, migration alone does nothing to reduce technical debt, and may even make it worse.

How to Truly Modernize

In a recent technical debt report, KPMG declared that “Getting a handle on it [technical debt] is mission-critical and essential for success in the modern technology-enabled business environment.”

If your company relies on legacy app processing for important aspects of your mission, it’s critical that you prioritize true modernization; that is, not just migrating your essential apps to the cloud, but refactoring them to give them full cloud-native capabilities while simultaneously eliminating or minimizing technical debt.

The first step is to conduct a comprehensive analysis of your legacy app portfolio to determine the amount and type of technical debt each app is carrying. With that data, you can then develop (and justify) a detailed modernization plan.

Here’s where an advanced modernization tool with AI-based application analysis capabilities can significantly streamline the entire process. The vFunction platform can automatically analyze the sources and extent of technical debt in your apps, and provide quantified measures of its negative impact on current operations and your ability to innovate for the future.

If you’d like to move beyond legacy application migration to true legacy app modernization, vFunction can help. Contact us today to see how it works.

Shift Left to Avoid Technical Debt Disasters

Can technical debt cause business disasters? Just ask Southwest Airlines: their technical debt caused a shutdown during the 2022 Christmas season that cost the company more than $1 billion, not to mention the goodwill of irate customers who were stranded by the collapse of the carrier’s flight and crew scheduling system.

Or you could ask Elon Musk, whose new Twitter acquisition suffered its own chaos-inducing disruption in March of 2023 due to what one employee described as “so much tech debt from Twitter 1.0 that if you make a change right now, everything breaks.”

As these examples indicate, unaddressed technical debt can indeed pitchfork a company into a sudden and disastrous disruption of its entire operation. That’s why for many companies, addressing the technical debt carried by the mission-critical software applications they depend on is at the top of their IT priorities list.

But identifying and reducing technical debt can be difficult. And that’s especially true of architectural technical debt, which is often even harder to isolate and fix.

In this article, we’ll examine the challenges of architectural technical debt, and see how continuous modernization, along with the “shift left” approach to quality assurance (which helps minimize that debt by beginning QA evaluations early in the development process) can substantially reduce a company’s vulnerability to technical debt disasters.

What is Architectural Technical Debt?

The Journal of Systems and Software describes technical debt as “sub-optimal design or implementation solutions that yield a benefit in the short term but make changes more costly or even impossible in the medium to long term.” Although the term has generally been applied to code, it also applies to architectural issues. The Carnegie Mellon Software Engineering Institute defines architectural technical debt similarly, in this way:

“Architectural technical debt is a design or construction approach that’s expedient in the short term, but that creates a technical context in which the same work requires architectural rework and costs more to do later than it would cost to do now.”

Architectural technical debt may be baked into an application’s design before coding even starts. A good example is the fact that most legacy Java apps are structured as monoliths, meaning that the codebase is organized as a single, non-modularized unit that has functional implementations and dependencies interwoven throughout.

Because the app’s components are all together in one place and communicate directly through function calls, this architecture may at first appear less complex than, for example, an architecture built around independent microservices that communicate more indirectly through APIs or protocols such as HTTPS.

Related: The Cost of Technical Debt and What It Can Mean for Your Business

But the tight coupling between functions in monolithic code imposes severe limitations on the flexibility, adaptability, and scalability of the app. Because functions are so interconnected, even a small change to a single function could have unintended consequences elsewhere in the code. That makes updating monolithic apps difficult, time-consuming, and risky since any change has the potential to cause the entire app to fail in unanticipated ways.
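
The coupling difference is easy to see in code. In this hypothetical Java contrast (class names and the URL are illustrative), the monolithic version reaches straight into another module’s internals, while the microservice version depends only on a published HTTP contract:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Stand-in for a module deep inside the monolith.
class InventoryManager {
    int reserveInternal(String sku, int qty) { return qty; }
}

class MonolithStyle {
    // Direct in-process call: Billing is coupled to InventoryManager's
    // internals, so any change there can silently break this code. That is
    // why a small change forces a full regression test of the monolith.
    int reserveStock(InventoryManager inventory, String sku, int qty) {
        return inventory.reserveInternal(sku, qty);
    }
}

class MicroserviceStyle {
    private final HttpClient http = HttpClient.newHttpClient();

    // Call across a service boundary: Billing depends only on the published
    // HTTP contract, not on how Inventory implements it.
    String reserveStock(String sku, int qty) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://inventory.internal/reservations"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"sku\":\"" + sku + "\",\"qty\":" + qty + "}"))
                .build();
        return http.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```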

Challenges of Managing Architectural Technical Debt

Not only may initial design decisions insert architectural technical debt into apps up front, but changes that occur over time, through a process known as architectural technical drift, can be an even more insidious driver of technical debt.

Architectural technical drift occurs when, to meet immediate needs or perhaps because requirements have changed, developers modify the code in ways that deviate from the planned architecture. The result is that over time the codebase diverges more and more from the architectural design specification.

What makes such drift so dangerous is that while designed-in architectural debt can be identified by comparing the design specification against modern best practices, the ad hoc changes inserted along the way by developers are typically documented poorly—if at all. 

The result is that architects often have little visibility into the actual state of a legacy codebase since it no longer matches the architecture design specification. And that makes architectural technical debt very hard to identify and even harder to fix.

The problem is that while architects have a variety of tools for assessing code quality through, for example, static and dynamic analysis or measuring cyclomatic complexity (a metric that reveals how likely the code is to contain errors, and how hard it will be to test, troubleshoot, and maintain), they haven’t had comparable tools for assessing how an app’s architecture is evolving or drifting over time.
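
For readers unfamiliar with the metric, here is a small, hypothetical Java example of how cyclomatic complexity is counted: start at 1 for the method, then add 1 for each decision point (each if, loop, case label, or boolean operator such as &&):

```java
// Worked example of cyclomatic complexity. This hypothetical method scores
// 1 (base) + 1 (first if) + 1 (second if) + 1 (the && operator) = 4,
// meaning four independent paths to test.
public class ComplexityExample {
    static String classify(int age, boolean member) {
        if (age < 0) {              // +1 decision point
            return "invalid";
        }
        if (age >= 65 && member) {  // +1 for the if, +1 for the &&
            return "senior-member";
        }
        return "standard";
    }
}
```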

Why Measuring and Addressing Architectural Technical Debt is Critical

While code quality and complexity are key application health issues, architectural technical debt is an even higher-level concern because unaddressed architectural deficiencies can make it very difficult, or even impossible, to upgrade apps to keep pace with the rapidly evolving requirements that define today’s cloud-centric technological environment.

For example, the monolithic architecture that characterizes most legacy Java codebases is notorious for having ingrained and intractable technical debt that imposes severe limitations on the maintainability, adaptability, and scalability of such apps.

But given the difficulty of detecting and measuring architectural technical debt, how can architects effectively address it to prevent it from eventually causing serious issues? As management guru Peter Drucker famously said, “You can’t improve what you don’t measure.”

The answer is by following a “shift left” QA strategy based on use of the advanced AI-based tools now available for detecting, measuring, monitoring, and remediating architectural debt and drift issues before they cause technical debt meltdowns.

Shifting Left to Address Architectural Technical Debt

In the traditional waterfall approach to software development, operational testing of apps comes near the end of the development cycle, usually as the last step before deployment. But architectural issues that come to light at that late stage are extremely difficult and costly to fix, and may significantly delay deployment of the app. The shift left approach originally aimed to alleviate that problem.

In essence, shift left moved QA toward the start of the development cycle—the technique gets its name from the fact that diagrams of the software development sequence typically place the initial phase on the left with succeeding phases added on the right. Ideally, the process begins, before any code is written, by assessing the architectural design to ensure it aligns with functional specifications and customer requirements.

Shift left is a fundamental element of Agile methodology, which emphasizes developing, testing, and delivering working software in small increments. Because the code delivered with each Agile iteration must function correctly, shift left testing allows verification of the design and performance of components such as APIs, containers, and microservices under realistic runtime conditions at each step of the development process.

In this context, shifting left for architecture gives senior engineers and architects visibility into architectural drift throughout the application lifecycle. It makes modernization a closed-loop process in which architectural debt is proactively observed, tracked, and baselined, and anomalies are detected early enough to avoid disasters.

That’s especially beneficial for modernization efforts in which legacy apps are refactored from monoliths to a cloud-native microservices architecture. Since microservices are designed to function independently of one another, the shift left approach helps to ensure that all services integrate smoothly into the overall architecture and that any functional incompatibilities or communications issues are identified and addressed as soon as they appear.
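
One concrete way teams shift architectural checks left is with architecture fitness tests that run on every build. The sketch below uses the open-source ArchUnit library for Java (our choice for illustration, not a tool named in this article) with hypothetical package names; the test fails the build the moment code drifts across an intended service boundary:

```java
import com.tngtech.archunit.core.domain.JavaClasses;
import com.tngtech.archunit.core.importer.ClassFileImporter;
import com.tngtech.archunit.lang.ArchRule;
import org.junit.jupiter.api.Test;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;

// An architecture fitness test: it fails as soon as code in the
// (hypothetical) orders package reaches into billing internals, surfacing
// architectural drift in CI rather than years later.
class BoundaryDriftTest {
    private final JavaClasses classes =
            new ClassFileImporter().importPackages("com.example.app");

    @Test
    void ordersMustNotDependOnBillingInternals() {
        ArchRule rule = noClasses().that().resideInAPackage("..orders..")
                .should().dependOnClassesThat().resideInAPackage("..billing.internal..");
        rule.check(classes);
    }
}
```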

The Importance of Continuous Modernization

One of the greatest benefits of legacy app modernization is that it substantially reduces technical debt. This is especially the case with monolithic apps—the process of refactoring them to a microservices architecture automatically eliminates most (though not necessarily all) of their technical debt.

But modernization isn’t a one-time process. Because of the rapid advances in technology and the quickly evolving competitive demands that characterize today’s business environment, from the moment an app is deployed it begins to fall behind the requirements curve and become out of date.

Plus, the urgency of those new requirements can put immense pressure on development and maintenance teams to get their products deployed as quickly as possible. That, in turn, often leads them to make “sub-optimal design or implementation solutions that yield a benefit in the short term.” And that, by definition, adds technical debt to even newly designed or modernized apps.

As a result, the technical debt of any app will inevitably increase over time. That’s not necessarily bad; it’s what you do about that accumulating debt that counts. Ward Cunningham, who coined the term “technical debt” in 1992, puts it this way:

“A little debt speeds development so long as it is paid back promptly with refactoring. The danger occurs when the debt is not repaid.”

That’s why continuous modernization is so critical. Without it, the technical debt carried by your company’s application portfolio is never repaid and will continue to increase until a business disaster of some kind becomes inevitable. As a recent Gartner report declares:

“Applications and software engineering leaders must create a continuous modernization culture. Every product or platform team must manage their technical debt, develop a modernization strategy and continuously modernize their products and platforms… Teams must ensure that they don’t fall victim to “drift” over time.”

Related: The Top 5 Reasons Technical Debt Accumulates in Your Business Applications

The Key to Continuous Modernization

Until recently, it’s been difficult for software development and engineering leaders to establish a culture of continuous modernization because they lacked the specialized tools needed for observing, tracking, and managing technical debt in general—and architectural technical debt in particular. But the recent advent of AI-based tools specially designed for that process has been a game changer. They enable software teams to identify architectural issues, understand their complexity, predict how much time and engineering effort will be required to fix them, and actually lead the team through the refactoring process.

The vFunction Continuous Modernization Manager enables architects to apply the shift left principle throughout the software development lifecycle to continuously identify, monitor, manage, and fix architectural technical debt problems. In particular, it enables users to pinpoint architectural technical drift issues and remediate them before they contribute to some future technical debt catastrophe.

If you’d like to know more about how an advanced continuous modernization tool can help your company avoid technical debt disasters, contact us today.

Getting a Handle on Architectural Debt

In March 2023, Amazon.com published an article on how it rearchitected its Prime Video offering from a distributed microservices architecture to a ‘monolithic’ architecture running within a single Amazon Elastic Container Service (ECS) stack.

Despite a reduction in infrastructure cost of over 90%, the seemingly counterintuitive move generated consternation across the cloud architecture community. Monoliths are ‘bad,’ laden with technical debt, while microservices are ‘good,’ free from such debt, they trumpeted. How could Amazon make such a contrarian move?

This controversy centers on what people mean by ‘monolith,’ and why its connotation is so negative. In general parlance, a monolith is a pattern saddled with architectural debt – debt that the organization must pay back sooner or later. Based on this definition, an organization would be crazy to move from an architecture with less debt to one with more.

But as the Amazon story shows, there is more to this story – not only a clearer idea of the true nature of architectural monoliths, but also the fundamental concept of architectural debt.

Architectural Debt: A Particular Kind of Technical Debt

As I explained in an earlier article, technical debt represents some kind of technology mess that someone has to clean up. In many cases, technical debt results from poorly written code, but more often than not, is more a result of evolving requirements that existing technology simply cannot keep up with.

Architectural debt is a special kind of technical debt that indicates expedient, poorly constructed, or obsolete architecture.

Even more so than the more familiar source code-related technical debt, architectural debt is often a necessary and desirable characteristic of the software architecture. The reason: too much software architecture in the early phases of a software project can cause systemic problems for the initiative that lead to increased costs and a greater chance of project failure.

In fact, the problem of architecture that is too much and too early, aka ‘overdesign,’ is one of the primary weaknesses of the waterfall methodology.

Instead, modern software principles call for ‘just enough’ or ‘just in time’ architecture, expecting architects to spend the minimum time on the task necessary to guide the software effort. If a future iteration calls for more or different architecture, then the architect should perform the additional work at that time.

Good vs. Bad Architectural Debt

Given such principles, you’d think that Amazon’s move to a monolith would be better received.

After all, the reason Amazon’s architects chose microservices in the first place was because such a decision was expedient and didn’t require excessive architectural work. The move to a monolith was simply a necessary rearchitecture step in a subsequent iteration.

Where the confusion arose was over the difference between this ‘good’ type of architectural debt – intentional ‘just enough, just in time’ architecture as part of an iterative design – and the ‘bad’ type: older, legacy architectures that may have served their purpose at the time, but are now obsolete, leading to increased costs and limited flexibility.

Examples of Good vs. Bad Architectural Debt

It may be difficult to distinguish between the two types of architectural debt. To help clarify the differences, here are two examples.

Example #1: Addressing good architectural debt.

An organization is implementing an application which will eventually have a global user base. The architects consider whether to architect it to support internationalization but decide to put this task off in the interests of expediency.

Eventually the development team must rework the app to support internationalization – a task that takes longer than it would have had they architected the app to support it initially.

Nevertheless, the organization was able to put the application into production more quickly than if they had taken the time to internationalize it, thus bringing in revenue sooner and giving themselves more opportunity to figure out how they should improve the application.

Example #2: Addressing bad architectural debt.

An organization struggles with the limitations of its fifteen-year-old Java EE application, running on premises on, say, Oracle WebLogic. The app is now too inflexible to meet current business needs, and the organization would like to move the functionality to the cloud – a migration that WebLogic is poorly suited for.

The organization must first take inventory of their existing architecture, requiring architectural observability that can delineate the as-is architecture of the application, how it’s behaving in production, and what its most urgent problems are. The architecture team must also establish an architectural baseline and then determine how much the as-is architecture has drifted from it.

At that point, the organization must implement a modernization strategy that considers the technical debt inherent in the internal interdependencies among architectural elements (Java classes, objects, and methods in this case). Only then can it make informed modernization decisions for the overall architecture as well as the software components that make up the application.

Architectural observability from tools like the vFunction Architectural Observability Platform is essential for understanding and thus dealing with bad architectural debt. Such debt is difficult to identify and even more difficult to fix. In some cases, fixing architectural debt isn’t worth the trouble – but without architectural observability, you’ll never know which architectural debt you should address.

The Intellyx Take

The term ‘monolith’ is saddled with all the negative connotations of bad architectural debt, but as the Amazon example illustrates, such connotation paints the term with too wide a brush.

In reality, what constitutes a monolith has changed over time. Object-oriented techniques relegated procedural programs to the status of monolith. Today, cloud native architectures apply the label to the object-oriented applications of the Java EE days.

Understanding architectural debt, therefore, goes well beyond the labels people put on their architectures. With the proper visibility, architects can differentiate between good and bad architectural debt and thus begin the difficult but often necessary process of modernization in order to get a handle on their organization’s architectural debt.

Copyright © Intellyx LLC. vFunction is an Intellyx customer. None of the other organizations mentioned in this article is an Intellyx customer. Intellyx retains final editorial control of this article. No AI was used in the production of this article.


Benefits of Adopting a Continuous Modernization Approach

In 2021, the Standish Group published a report on the effectiveness of different approaches to software modernization. The report compared the efficacy of replacing legacy solutions with entirely new code against using existing components as the basis for modernization. The authors also identified an approach they called “Infinite Flow,” in which modernization is a continuous process rather than a project with a start and end date.

The Standish Group’s definition of Infinite Flow mirrors that of continuous modernization (CM): a continuous process of delivering software updates incrementally, allowing developers to replace legacy software iteratively with less disruption to users. Both definitions focus on the ongoing nature of software delivery and its organizational impact.

The report’s authors determined that continuous-flow processes deliver more value than other methodologies, such as agile or waterfall. They calculated that waterfall projects are 80% overhead and return a net value of only 20%, while continuous modernization inverts the ratio, operating with 20% overhead and delivering 80% net value. They also calculated that 60% of customers were disappointed in the software delivered at the end of a large development effort, compared with only 8% of customers under continuous development processes.

If continuous modernization delivers greater net value and higher customer satisfaction, why aren’t more organizations using the methodology as they replace legacy systems? Let’s take a closer look at the strategy and the benefits so many companies are missing.

What is Continuous Modernization?

The Information Systems Audit and Control Association (ISACA) defines continuous modernization as a strategy to evolve an existing architecture continuously and incorporate emerging technologies in the core business operating model. With CM, organizations develop solutions in increments that encourage frequent releases where the software is monitored and refined, feeding back into the development cycle. 

The approach allows companies to gradually replace aging technologies that pose business risks. It enables businesses to add features and functionality to existing systems without disrupting operations. However, CM is more than a development strategy. It is a mindset.

Traditional software development is project-based: a scope of work is defined with a start and end date, whether the development method is waterfall or agile. Cumulative software updates are released on a predefined date. After installation, some bugs are identified and fixed; other flaws are added to the scope of work for the next release.

With CM, on the other hand, software development becomes part of a continuous improvement mindset where each iteration enhances the existing software. New software is deployed monthly, weekly, or daily. Unlike project-based development, changes are not withheld until a project scope has been completed. The steady stream of updates requires a cultural shift.
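One common mechanism behind that steady stream is the feature flag, which decouples deploying code from exposing it to users. The Java sketch below is a minimal illustration rather than a prescription; the flag name and class names are hypothetical, and a production system would typically read its flags from a configuration service rather than an in-memory map.

```java
import java.util.Map;

public class CheckoutHandler {
    private final Map<String, Boolean> flags; // in production, loaded from a config service

    public CheckoutHandler(Map<String, Boolean> flags) {
        this.flags = flags;
    }

    public String checkout(String cartId) {
        // The rewritten path ships "dark" and is enabled per release, so
        // deploying the code is decoupled from exposing it to users.
        if (flags.getOrDefault("new-checkout", false)) {
            return checkoutV2(cartId);
        }
        return checkoutV1(cartId);
    }

    private String checkoutV1(String cartId) { return "legacy checkout for " + cartId; }
    private String checkoutV2(String cartId) { return "incremental rewrite for " + cartId; }
}
```

Because the old path stays in place, a problematic change can be switched off in seconds instead of rolled back with a redeployment.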

Related: Navigating the Cultural Change of Modernization

Traditional key performance indicators (KPIs) for software development, and the methods used to measure them, no longer apply. Testing procedures are automated to keep pace with incremental software releases. End users see small changes in the user interface or functionality instead of massive changes delivered all at once. If organizations are to realize the following benefits of CM, they need to address the cultural changes necessary to support a continuous improvement model.
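Automated testing is one of those changes in practice. As a minimal, hypothetical sketch, the JUnit 5 tests below exercise both paths of the CheckoutHandler from the previous example, so every incremental release runs the same checks without a manual test pass.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.Map;
import org.junit.jupiter.api.Test;

class CheckoutHandlerTest {
    @Test
    void flagOffKeepsLegacyBehavior() {
        // With the flag disabled, users continue to see the legacy path.
        CheckoutHandler handler = new CheckoutHandler(Map.of("new-checkout", false));
        assertEquals("legacy checkout for cart-1", handler.checkout("cart-1"));
    }

    @Test
    void flagOnRoutesToNewPath() {
        // With the flag enabled, traffic routes to the incremental rewrite.
        CheckoutHandler handler = new CheckoutHandler(Map.of("new-checkout", true));
        assertEquals("incremental rewrite for cart-1", handler.checkout("cart-1"));
    }
}
```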

What Are the Benefits of CM?

The 2021 Standish report indicated that the flow-like modernization methodology had the following benefits:

  • Modernization using a series of microprojects had better outcomes than a single large project.
  • Microprojects achieved greater customer satisfaction because of built-in feedback loops.
  • Microprojects delivered higher net value.
  • Modernization using continuous improvement reduced risk and monetary loss.
  • Continuous modernization sustained a higher degree of innovation.
  • Continuous modernization extended application life.

Outcomes were evaluated in terms of time, budget, and customer satisfaction. In general, smaller projects in a continuous improvement model delivered better outcomes than more traditional large projects, especially in the areas of customer satisfaction, net value, and financial loss.

Increased Customer Satisfaction

Continuous modernization is less disruptive to operations. Delivering a large project often results in downtime, and even if the software is installed after hours, the sweeping changes usually require user training. Struggling to learn new software while performing their jobs frustrates employees.

Since most large projects do not solicit extensive user input during development, the updated features may not work the way users expected. Customers become disgruntled when they are told a feature operates as designed, so it isn’t a bug and won’t be addressed until the next release.

With microprojects, small changes are made incrementally with minimal impact on user operations. Employees aren’t trying to learn new functionality while performing their jobs, and soliciting feedback from users as changes are deployed means modifications can be incorporated into the iterative process.

Reduced Risk

Old code is risky code. Who knows what is lurking in those undocumented modules? Depending on the age of the software, everyone associated with the original project may have left the company. Suddenly organizations are faced with a knowledge deficit. How can they support the software if no one understands the code?

Twitter is an excellent example of the impact technical debt and knowledge deficit can have on a company. Long before Elon Musk took over Twitter, employees complained that parts of the application were too brittle. Some even suggested that the technical debt was too extensive, requiring a complete rewrite. Then, Musk began his widespread staff reduction. As a result, fewer employees were available to keep brittle code functional.

In March 2023, Twitter suffered an operational failure in which users were unable to open links; the API that handled the data interchange had stopped working. After service was restored, the source of the failure turned out to be employee error: the one engineer assigned to the API had made an incorrect configuration change. Removing old code reduces the risk that a simple configuration change produces a disastrous outcome.

Reduced Technical Debt

Technical debt is no different from financial debt: at some point, it must be repaid, and if it goes untouched, it only accumulates until an organization is no longer viable. A recent survey found that technical debt limits the ability to innovate at roughly 70% of companies.

CM allows developers to gradually replace the legacy code that contributes to technical debt, and it keeps the debt from growing. For example, companies that release software updates once a year accumulate debt the entire time they are writing new code. Given the exponential rate of digital adoption, that debt can easily double in a year.

Following a continuous modernization approach, developers are consistently replacing older code. Because incremental updates require less testing time, new code can be delivered faster, and changes in methodology or coding standards can be folded into the ongoing development cycle to minimize the technical debt that gets added.
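A widely used pattern for this gradual replacement is the strangler fig: a thin facade routes each request either to the legacy module or to its modernized replacement, so old code can be retired one operation at a time. The sketch below is a simplified, hypothetical Java rendering of the idea.

```java
import java.util.Set;

public class OrderFacade {
    private final OrderService legacy = new LegacyOrderService();
    private final OrderService modern = new ModernOrderService();
    // Operations already rewritten and verified in production.
    private final Set<String> migrated = Set.of("create", "cancel");

    public String handle(String operation, String payload) {
        // Route migrated operations to the new implementation; everything
        // else continues to hit the legacy code until its turn comes.
        OrderService target = migrated.contains(operation) ? modern : legacy;
        return target.execute(operation, payload);
    }
}

interface OrderService {
    String execute(String operation, String payload);
}

class LegacyOrderService implements OrderService {
    public String execute(String operation, String payload) { return "legacy:" + operation; }
}

class ModernOrderService implements OrderService {
    public String execute(String operation, String payload) { return "modern:" + operation; }
}
```

As each operation is rewritten and verified in production, it joins the migrated set; once the set covers everything, the legacy implementation can be deleted outright.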

Limited Monetary Loss

Continuous modernization incorporates user feedback into the development process. With feedback, developers can adjust the software to better reflect user needs. This process minimizes monetary loss that can result from a comprehensive software update.

Large development projects that follow the traditional management path consume significant resources before the end user sees the software. If the final product does not meet expectations, companies run the risk of bringing a product to market that lacks key features. Costs for reworking the software are added to the original expenditures. Businesses can find themselves selling solutions at a loss if the market will not support a price increase.

Related: How Opportunity Costs Can Reshape Measuring Technical Debt

With large projects, the opportunity costs can be significant if resources are tied up reworking software after delivery. Instead of pursuing an innovative solution, developers are occupied with existing development. Iterative development allows for immediate feedback so course corrections can occur early in the development process. If the product fails to meet market expectations, organizations can terminate the effort before incurring significant losses.

Sustained Innovation

Adopting a continuous improvement mindset allows developers and architects to implement a continuous modernization methodology for software development. The process enables programmers, DevOps teams, and engineers to deliver innovative solutions as part of their everyday work.

The iterative approach lets developers test innovative solutions as early in the process as possible and receive user feedback to ensure acceptance. Freed from reworking existing code and compensating for technical debt, development staff can spend more time exploring opportunities.

Limiting financial loss and reducing risk from outdated code provide businesses with added resources to investigate new markets. With a cost-effective methodology for modernization, organizations can deliver innovative solutions that consistently meet customer expectations.

Realize the Benefits of Continuous Modernization

To realize the benefits of continuous modernization, businesses must establish new KPIs and measure against them. They must also look for tools that can assess technical debt and refactor applications.

vFunction’s Assessment Hub analyzes applications, identifies technical debt, and calculates its impact. The Modernization Hub helps architects transform monoliths into microservices. The newly released Continuous Modernization Manager lets architects shift left and address issues that could impede ongoing modernization. To see how we can help with your modernization project, request a demo today.