
The Role of Observability in Continuous Refactoring

In today’s fast-evolving technological and marketplace environments, the software applications that companies depend on must be able to quickly adapt to the demands of an ever-changing competitive landscape. But as existing features are modified and new ones added, an app’s codebase inevitably becomes more complex, making it increasingly difficult to understand, maintain, and upgrade. How can developers ensure that their apps remain flexible and adaptable even as changes are incorporated over time? They can do so by applying the principle of continuous refactoring.

As Techopedia notes, the purpose of refactoring is to improve properties such as readability, complexity, maintainability, and extensibility. When these factors are regularly enhanced through continuous refactoring, apps can maintain their utility well into the future.

But, as management guru Peter Drucker famously said, “You can’t improve what you don’t measure.” That’s what architectural observability is all about: ensuring that developers have the ability to see and quantitatively measure the conditions they need to address through the critical process of continuous refactoring.

Why Modernization Requires Continuous Refactoring

Application modernization can be described as the process of restructuring old apps to eliminate technical debt and seamlessly integrate them into today’s cloud-centric technological ecosystem. Technical debt is typically a major stumbling block in such integrations because it, by definition, involves “sub-optimal design or implementation solutions that yield a benefit in the short term but make changes more costly or even impossible in the medium to long term.” In other words, apps afflicted with significant amounts of technical debt can be extremely difficult to upgrade to meet new technical or competitive requirements.

The modernization process typically begins by refactoring legacy apps from a monolithic architecture (in which the codebase is organized as a single unit that has functional implementations and dependencies interwoven throughout) to a cloud-native distributed microservices architecture. But application modernization doesn’t stop there.

Because the cloud environment is constantly evolving, with technological innovations being introduced practically every day, new technical debt, and the decrease in adaptability that comes with it, begins to accumulate in an app from the moment it’s migrated to the cloud. That’s why modernization can’t be a one-and-done deal—rather, it’s a never-ending, iterative process. Martin Lavigne, R&D Lead at Trend Micro, puts it this way:

“Continuous modernization is a critical best practice to proactively manage technical debt and track architectural drift.”

Since refactoring is fundamental to the modernization process, continuous modernization necessarily involves continuous refactoring to remove technical debt and ensure that apps maintain their adaptability and utility over time.

The Impact of Architectural Drift

As Lavigne’s statement indicates, tracking and addressing architectural drift is a critical element of a successful application modernization program. That’s because architectural drift is one of the main sources of the new technical debt that migrated apps constantly accumulate as they operate and are updated in the cloud. So, what is architectural drift and why is it so important for ongoing application modernization?

Related: Benefits of Adopting a Continuous Modernization Approach

Apps are typically designed or modernized according to a coherent architectural plan. But as new requirements arise in the operating environment, quick changes are often made in an unregulated or ad hoc manner to meet immediate needs. 

As a result, the codebase and architecture begin to evolve in directions that are not consistent with the original architectural design, and technical debt, in the form of anti-patterns such as dead code, spaghetti code, class entanglements, and hidden dependencies, may grow. To make matters worse, such changes are frequently documented sparsely—if at all.

This inevitable accumulation of architectural technical debt over time is what architectural drift is all about. And it can be deadly. In an article that describes architectural technical debt as “a silent killer for business,” Jason Bloomberg, Managing Partner at Intellyx, makes this comprehensive declaration:

“One of the most critical risks facing organizations today is architectural technical debt. The best way to keep such debt from building up over time is to adopt Continuous Modernization as an essential best practice. By measuring and managing architectural technical debt, software engineering teams can catch architectural drift early and target modernization efforts more precisely and efficiently.”

But that’s not an easy task. Architectural drift is difficult to identify and even harder to fix. The problem is that application architects have traditionally lacked the observability required for understanding, tracking, measuring, and managing architectural technical debt as it develops and grows over time. And, to paraphrase Peter Drucker’s maxim, you can’t improve what you can’t observe.

The Basics of Observability

Observability is an engineering term of art that refers to the ability to understand the internal states of a system by examining its outputs. A system or application is considered observable if its current state can be inferred based on the information provided by what are known as the three pillars of observability: event logs, metrics, and traces.

  • Event logs are records that track specific events chronologically. They are critical for characterizing the app’s behavior over time.
  • Metrics are quantitative measurements, such as CPU utilization, request response times, and error rates, that provide numerical data about the performance and health of the app.
  • Traces record the end-to-end flow of transactions through the app. They allow developers and architects to understand the interactions and dependencies between various components of the app.
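To make the three pillars concrete, here is a minimal sketch using only Python's standard library; the service names, fields, and values are illustrative assumptions, and production systems would typically use a framework such as OpenTelemetry:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("checkout")

# Pillar 1: event log -- a timestamped record of a specific event.
log.info("order received", extra={"order_id": "A123"})

# Pillar 2: metric -- a numeric measurement sampled over time.
metrics = {}
start = time.perf_counter()
time.sleep(0.01)  # stand-in for real request handling
metrics["request_latency_ms"] = (time.perf_counter() - start) * 1000

# Pillar 3: trace -- spans linked by a shared trace ID record the
# end-to-end flow of one request across components.
trace_id = uuid.uuid4().hex
spans = [
    {"trace_id": trace_id, "span": "api-gateway", "parent": None},
    {"trace_id": trace_id, "span": "inventory-service", "parent": "api-gateway"},
]
print(json.dumps(spans, indent=2))
```

Logs answer "what happened," metrics answer "how much and how fast," and the shared trace ID is what lets a trace stitch one request's path together across services.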

Observability is crucial for the initial refactoring of monolithic legacy apps into microservices. A fundamental goal in the refactoring process is to maintain functional parity between the original application and its post-refactoring implementation in the cloud. That is, the refactored app should initially (before any updates or corrections are incorporated) function identically with the original monolith in all feasible operational scenarios.

Achieving functional parity depends on architects having deep insight into the performance and functioning of the original monolithic codebase. A high degree of observability is required to ensure that all functionalities and use cases of the original app are identified and appropriately addressed in the refactored implementation.

Related: How Continuous Modernization Can Address Architectural Drift

Once an app has been initially refactored and integrated into the cloud environment, observability becomes even more important. An app that’s been restructured to a cloud-native, microservice-based, distributed architecture is typically composed of many different components and services that, by design, function and scale independently of one another. 

Although such apps are almost uniformly easier to understand conceptually than their monolithic precursors, they are also more topologically and operationally complex, requiring an even greater depth of observability for developers to fully understand how the system is functioning.

Applying Observability in Continuous Refactoring

Architectural observability is a key element of the continuous refactoring process. It allows architects to identify, monitor, and fix application architecture anomalies on an iterative basis before they grow into bigger problems. The fundamental principle governing observability in app modernization is that comprehensive monitoring must be performed throughout the refactoring process so that developers have an in-depth view of the behavior and performance of their apps at every step.

Achieving comprehensive architectural observability involves a combination of static analyses and real-time operational monitoring that enables development teams to gain deep insights into their application’s structure, behavior, and performance at every stage of refactoring. Key performance indicators (KPIs) are defined and tracked, and monitored load and stress testing is conducted to identify potential bottlenecks and scaling challenges.

Architectural drift is detected by first establishing an initial architectural baseline that describes how the app functions in normal operation. Monitoring then continues as changes are detected in the architecture, allowing developers to proactively detect and correct issues that can lead to architectural drift. The baseline is reset and the monitoring procedure repeated at each step in the continuous refactoring process.
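The baseline-and-compare cycle described above can be sketched by modeling the architecture as a set of dependency edges; the service names and edges below are hypothetical, and a real platform would derive them from static analysis and runtime traces:

```python
def detect_drift(baseline, current):
    """Report dependency edges added or removed since the last baseline."""
    return {
        "added": sorted(current - baseline),
        "removed": sorted(baseline - current),
    }

# Hypothetical caller -> callee edges captured at the last baseline.
baseline = {
    ("OrderService", "PaymentService"),
    ("OrderService", "InventoryService"),
}

# Edges observed in the current monitoring window.
current = {
    ("OrderService", "PaymentService"),
    ("OrderService", "InventoryService"),
    ("PaymentService", "OrderService"),  # new reverse edge: a circular dependency
}

drift = detect_drift(baseline, current)
print(drift["added"])  # flags the new circular dependency for review

# Once the flagged issues are reviewed and resolved, the baseline is
# reset to the current state and the cycle repeats.
baseline = set(current)
```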

Tools for Observability in Continuous Refactoring

Attaining a high degree of observability requires the use of appropriate monitoring tools. Such tools must provide deep domain-driven observability through sophisticated static analyses, as well as dynamic tracking of process flows and dependency interactions during actual user activity or test scenarios.

A good observability tool will be capable of baselining, monitoring, and alerting on architectural drift issues such as:

  1. Dead Code: code that is still reachable but no longer exercised by any current user flow in production.
  2. Service Creep: services added, deleted, or modified in ways that no longer align with the established architectural design.
  3. Common Classes: commonly used functions that are not collected into a shared class library to reduce duplicate code and dependencies.
  4. Service Exclusivity: microservices that lack a clearly defined scope or are unnecessarily interdependent with other services.
  5. High-Debt Classes: classes that carry a high degree of technical debt due to elevated complexity, functional issues or bugs, and poor maintainability or adaptability.
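As a rough illustration of the first check, dead-code candidates can be flagged by comparing the functions found by static analysis against those actually reached in production traffic; the function names here are made up:

```python
# Functions present in the codebase, per static analysis (illustrative).
declared = {"create_order", "cancel_order", "export_legacy_report", "apply_discount"}

# Functions actually observed in production traces during the monitoring window.
observed = {"create_order", "cancel_order", "apply_discount"}

# Reachable but never exercised: candidates for removal.
dead_code_candidates = sorted(declared - observed)
print(dead_code_candidates)  # ['export_legacy_report']
```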

A good example of an advanced observability tool that performs these functions at a high level is the vFunction Architectural Observability Platform. This solution allows architects to manage, monitor, and fix architectural drift issues on an iterative, continuous basis. Not only does it identify and track architectural anomalies, but it notifies developers and architects of them in real time through common alert systems such as email, Slack, or the vFunction Notifications Center.

If you’d like to know more about how a state-of-the-art tool can provide the architectural observability needed to incorporate continuous refactoring into your application modernization process, we can help. Contact vFunction today to see how.

How to Prioritize Tech Debt: Strategies for Effective Management

Most companies carry technical debt, costing them 20% to 40% of their technology’s value. Over 60% of chief information officers believe their technical debt will continue to grow. Some suggest the debt adds 20% to 30% to the cost of any development project. Yet, there’s no consensus as to what equals too much debt. 

Some suggest a percentage—less than 10% or never more than 20%. Others assess debt based on its impact on velocity, innovation, cost, or system maintenance. There’s even an approach using the 80/20 rule: applying 20% of a team’s time will address 80% of the problems, most often the low-hanging fruit. The remaining 20% will take 80% of their time, but for most organizations that’s the most critical component impacting their business. Deciding what to do with that heavy 20% requires a strategy for how to prioritize tech debt.

How to Prioritize Tech Debt: Begin with Assessment

Prioritizing debt means quantifying it. How much debt is there? What type of debt exists? Is there legacy code? What about dead code? How badly is it affecting business velocity and innovation? Knowing the type and amount of debt is the first step in prioritization.

Assess the Technical Debt

Before setting priorities, organizations need to know their current technical debt situation. They can calculate technical debt using defect or technical debt ratios. They can evaluate code quality or the time to complete maintenance tasks. Here are five examples of how IT departments calculate debt.

  • Architectural technical debt. While this is the most difficult to calculate, it is the most important to track, as it involves the accumulation of architectural decisions and implementations that produce highly complex software, manifested in slow engineering velocity, diminished innovation, and limited scalability.
  • Defect ratios. Software development tracks the number of new versus fixed defects. If new defects are reported faster than developers can address them, the ratio is higher, indicating a growing technical debt.
  • Technical debt ratios (TDRs). TDRs estimate the potential cost of technical debt. Organizations compare the cost of fixing a problem, such as a legacy application, versus the cost of building a new application. 
  • Code quality. This involves identifying quality metrics such as lines of code, inheritance debt, and tight couplings to quantify code quality and complexity. Coding standards can be used to help control code quality.
  • Rework. As code matures, the amount of rework should decline. If architects and engineers are redoing production or infrastructure code, it is most likely the result of technical debt.
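The defect-ratio and TDR calculations above reduce to simple arithmetic; the figures below are invented for illustration:

```python
def defect_ratio(new_defects, fixed_defects):
    """New vs. fixed defects per period; a ratio above 1.0 means
    defects arrive faster than they are fixed, so debt is growing."""
    return new_defects / fixed_defects

def technical_debt_ratio(remediation_cost, development_cost):
    """TDR: estimated cost to fix the debt relative to the cost of
    building the system, expressed as a percentage."""
    return 100 * remediation_cost / development_cost

print(defect_ratio(30, 24))                   # 1.25 -> backlog growing
print(technical_debt_ratio(50_000, 400_000))  # 12.5 (%)
```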

Automated tools make the process less cumbersome; however, the solutions vary significantly. When looking at tools, make sure the solution fits the development environment, offers observability, and supports continuous modernization.

Establish a Baseline

McKinsey found that companies pay 10% to 20% more per project to address technical debt. About 30% of chief information officers (CIOs) said that 20% of their new product development budget is used to resolve technical debt issues. Setting a baseline helps channel efforts toward sustaining an acceptable level of tech debt.

Related: Technical Debt: Who’s Responsible?

Initial assessments identify the current level of technical debt, but a baseline should establish an ongoing target. Deciding on the optimum baseline requires more than picking a number. It requires a management strategy that prioritizes debt to minimize risk, encourage innovation, and deliver efficiencies.

Set Priorities

Reducing debt is not just a technical decision. Business objectives play a role in setting priorities. While developers may focus on eliminating debt that keeps them from working on new features, executives may want to lower the risk of operational failure. At the same time, executives may focus on replacement costs rather than the resources lost to maintaining an aging system.

As IT departments evaluate how to prioritize tech debt, they must prioritize troublesome areas according to operational risk, maintenance requirements, and innovating capabilities. Breaking down code into these three groups helps identify the potential business impact of tech debt. It also simplifies the process of assigning technical priorities.

Operational Risk

Two words can summarize the importance of operational risk when setting priorities: Southwest Airlines. Despite employee warnings, the company chose to ignore its growing technical debt until the perfect storm hit during the 2022 holidays. The results were almost 17,000 canceled flights, disgruntled employees, and declining customer trust. The company estimated the outage cost them $825 million.

Legacy software also poses a security risk. Whether third-party libraries or unsupported software, old code presents security vulnerabilities. It often does not support recommended security practices such as multi-factor authentication (MFA). Known vulnerabilities can be exploited as hackers comb the internet for specific applications.

Maintenance

Inflexible codebases and complex systems increase the time needed to address customer issues. A time-out when running a report can take hours—time needed to isolate the source, understand the code, and test the fix. Faster deployment systems do not work with older code, and delivery can take another day. What should have taken four hours at the most consumes two days of a developer’s time.

Some teams allocate 25% of their workweek to addressing technical debt. They make it part of everyone’s workload. However, successful implementation requires a system to ensure the time is being used appropriately. Pressure to deliver new features or fix an “immediate” problem can easily take time away from removing technical debt.

Innovation

Inefficient tools and processes add to the time developers spend on non-coding tasks. Those minutes quickly turn into hours, leaving less time for innovating new product features. Tech debt can mean infrastructure and applications that cannot support newer technology.

With high technical debt, organizations may lack the agility to deploy the latest technology. For example, big data analytics and artificial intelligence (AI) rely on the cloud for processing power. Companies looking to implement these new technologies will want solutions that work seamlessly with the cloud. 

Define a Technical Debt Management Strategy

One of the biggest obstacles to removing technical debt is time. There’s never enough to simultaneously reduce debt, maintain current code, and develop new features. Unless there’s a clear strategy with established priorities, departments can find themselves adding to instead of removing tech debt. An effective management strategy acknowledges that removing technical debt is a continuous process. 

Adopt Continuous Modernization

Continuous modernization uses incremental improvements in an iterative process to deliver software changes. The process minimizes risk while increasing value. Projects are smaller, allowing for greater agility and faster feedback. 

With a continuous modernization model, organizations resolve technical debt in steps. By establishing business-aligned priorities, code changes can be ranked according to complexity, resource availability, and time. For example, a high-priority change is needed to protect against operational failure. However, its complexity requires significant resources. The change may rank slightly lower than expected because the resources are not available. 

While waiting for resources, team members are assigned other priority changes that require fewer resources. The process ensures that the most crucial technical debt is being addressed as quickly as possible but is not preventing other improvements from being made. When it’s time to address the high-priority fix, teams can use observability tools to see how the incremental improvements are working.
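The ranking logic described above can be sketched as a weighted score in which complexity beyond the currently available resources pushes an item down the list; the weights, fields, and backlog items are assumptions for illustration, not a prescribed formula:

```python
def priority_score(item, resources_available):
    """Illustrative weighted score: higher means address sooner.
    Business impact and risk dominate, but complexity beyond the
    resources currently available lowers the rank."""
    score = 3 * item["business_impact"] + 2 * item["operational_risk"]
    deficit = item["complexity"] - resources_available
    if deficit > 0:
        score -= 2 * deficit
    return score

backlog = [
    {"name": "fix-circular-deps", "business_impact": 5,
     "operational_risk": 5, "complexity": 8},
    {"name": "extract-billing-service", "business_impact": 4,
     "operational_risk": 3, "complexity": 3},
]

ranked = sorted(backlog, key=lambda i: priority_score(i, resources_available=2),
                reverse=True)
print([i["name"] for i in ranked])
# ['extract-billing-service', 'fix-circular-deps'] -- the complex,
# high-impact fix waits until resources free up.
```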

Ensure Architectural Observability 

A baseline enables architects to understand what changes are needed and how those changes will impact technical fitness. It allows developers to assess architectural drift. Without observability, teams struggle to see and pinpoint their architectural debt, fix it, and prevent future drift from impacting performance. Comparing baselines before and after modernization efforts helps identify whether previous issues were fixed and what new issues may need to be addressed.

Related: How Unchecked Technical Debt Can Result in Potential Business Catastrophe

Architectural observability will help architects and developers:

  • Identify domains with dynamic analysis and AI
  • Visualize class and resource dependencies and cross-domain pollution
  • Pinpoint high-debt classes
  • Improve modularity by adding common libraries
  • Identify dead code and dead flows based on production data

Observability tools can also provide data to support business-related priorities. Tracking architectural efficiencies can highlight improvements that reduce risks. Tools also provide data on the efficacy of each change. Together, the information builds a framework for how to prioritize tech debt in any environment.

Assign Ownership

Visibility only has value if someone is using it. By enabling ownership for architects and their applications, organizations can ensure that someone is observing the modernization process. With automated tools, tracing architectural drift can be as simple as setting threshold values. No one needs to pore over log files or stare at screen output to ensure that technical debt is being reduced.

Continuous modernization platforms can provide automation to manage technical debt through an iterative process. They can offer system architects the visibility they need to develop a management strategy that reflects business and technical priorities. With automated tools, ownership becomes an informative process that leads to continuous improvement rather than a burdensome task to avoid.

How to Prioritize Tech Debt Effectively

While leading the removal of tech debt may be an architect’s domain, deciding on priorities is a shared responsibility. Development teams must look at operational risks, maintenance costs, and innovation limitations when setting priorities. They must also weigh resource availability and delivery schedule to decide how to best optimize modernization efforts. 

To be successful, IT departments must integrate priority-setting strategies with continuous modernization models. They need automated tools that provide the observability to ensure that tech debt is being reduced and architectural drift is contained. Automated tools enable development teams to take ownership of the modernization process. vFunction’s modernization platform enables organizations to assess and prioritize their technical debt. It helps teams manage their continuous modernization processes with observability to successfully manage architectural drift and technical debt. Contact us today to request a demo and see how the platform can work for you.

Unleashing Potential: A Deep Dive into a Strangler Fig Pattern Example

In a recent survey of corporate IT leaders, 87% of respondents said that modernizing their critical legacy apps is a top priority. When it comes to developing an effective approach to the complex and difficult task of application modernization, a great place to start is by taking an in-depth look at a Strangler Fig Pattern example and how it can help with modernization efforts. 

Companies must focus on increasing their agility in order to meet the constantly changing demands of today’s marketplace. For most, doing so will involve upgrading their business-critical legacy software applications to function effectively in today’s cloud-based technological ecosystem. 

The problem with most legacy apps is that they are extremely difficult to update, extend, and maintain because of their monolithic architecture. A monolithic codebase is organized as a single unit with functions and dependencies interwoven throughout. Because of those often-hidden dependencies, a change to any part of the codebase may have unintended and unforeseen effects elsewhere in the code, potentially causing the app to fail.

But when legacy apps are modernized and deployed using the Strangler Fig Pattern, technical debt can be remediated more quickly, efficiently, and safely than with more traditional approaches.

What a Strangler Fig Pattern Example Can Teach Us

The Strangler Fig Pattern is a key concept for understanding how to address technical debt and safely modernize large monolithic Java and .NET apps. But what, exactly, is the Strangler Fig Pattern?

The term was coined in 2004 by Martin Fowler. He noticed that the seeds of the strangler fig tree germinate in the upper branches of another tree. As the strangler fig tree’s roots work their way to the ground, they surround the host tree and, over time, expand so much that they strangle and eventually kill it. At that point, the strangler fig tree has, in effect, replaced the original tree.

Fowler saw this pattern as a good model for the way large monolithic apps can be safely modernized by creating a set of microservices that surround the app, replacing its functions one-by-one until the original app is entirely superseded by the framework of microservices built around it. That’s the Strangler Fig Pattern in application modernization.

Related: The Strangler Architecture Pattern for Modernization

To get a feel for how this pattern works in practice, we want to dig into a real-world Strangler Fig Pattern example that will illustrate the process and help us understand the results that can be expected. We’ll start by looking more closely at why the Strangler Fig Pattern is so crucial for the app modernization process. Then we’ll examine a case study that shows how one corporation applied the strangler fig concept in modernizing its large portfolio of business-critical legacy apps.

Why App Modernization Is So Difficult

The goal in reducing technical debt and modernizing legacy applications is to restructure them from their original stand-alone, monolithic design—a design that can integrate only partially and with great difficulty into the modern cloud ecosystem—into a more modular cloud-native microservices architecture that functions naturally in that environment.

Microservices are designed to be small, loosely coupled, self-contained, and autonomous. Each one implements a single business function and can be developed, deployed, executed, and scaled independently of the others. Because of that loose coupling between functions, a microservices-based app can be updated relatively quickly, easily, and safely.

In contrast, the very structure of the typical legacy Java app injects a high degree of complexity into the modernization process. The functions and services of a monolithic codebase are usually so tightly coupled and interdependent (and, in many cases, so inadequately documented) that unraveling execution paths and dependencies to gain a clear understanding of the code’s functionality and run-time behavior can be a complex and error-prone process. This inherent observability issue makes identifying and implementing appropriate technical remediation fixes extremely difficult, time-consuming, and risky.

The key issue in modernization is to restructure a legacy app’s architecture to give it cloud-native capabilities while ensuring that the functionality of the original app is faithfully maintained.

How the Strangler Fig Pattern Facilitates Application Modernization

Because of the difficulties an architect will typically encounter in developing a comprehensive understanding of a monolithic codebase’s behavior in all possible runtime scenarios, any attempt to completely replace or restructure a large legacy app all at once will almost certainly introduce bugs that can cause significant and often hard-to-trace operational disruptions.

The Strangler Fig Pattern allows the restructuring to be done step by step, one function at a time. At each step a single domain or microservice is implemented and fully tested before it is incorporated into the app. The testing is accomplished by running the new domain or microservice in parallel with the original app in the production environment to ensure that both always respond identically to the same inputs.

The testing process is facilitated by the use of an interface layer called a façade. All external requests to the application go through the façade. Initially, before any microservices are incorporated, the façade simply passes requests directly to the original app. But once a new microservice is implemented and verified through testing, the façade directs all requests concerning that function to the microservice rather than to the old app.
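The façade's routing role can be sketched in a few lines; the class and service names are hypothetical, and a real façade would typically be an API gateway or routing layer rather than in-process code:

```python
class StranglerFacade:
    """Routes each request to the new microservice once that function
    has been extracted and verified; everything else still goes to
    the legacy monolith."""

    def __init__(self, legacy_app):
        self.legacy_app = legacy_app
        self.migrated = {}  # function name -> microservice handler

    def promote(self, function, microservice):
        """Cut a verified function over to its new microservice."""
        self.migrated[function] = microservice

    def handle(self, function, request):
        handler = self.migrated.get(function, self.legacy_app)
        return handler(function, request)

# Illustrative handlers (assumed, not from any real case study):
def legacy_monolith(function, request):
    return f"monolith handled {function}"

def billing_service(function, request):
    return f"billing-service handled {function}"

facade = StranglerFacade(legacy_monolith)
print(facade.handle("billing", {}))   # routed to the monolith
facade.promote("billing", billing_service)
print(facade.handle("billing", {}))   # now routed to the microservice
```

During the parallel-testing phase described above, the façade could instead call both handlers and compare their responses before promoting the microservice.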

Because each domain or microservice is exhaustively tested in normal operations before it replaces the equivalent original function, there’s typically no need to ever bring the app offline to do a cutover to the new version, and the chances of unexpected disruptions due to the restructuring process are all but eliminated. Nor is there any need to maintain two different versions of the app since the original code is never changed but is simply replaced, function by function, one at a time, by microservices.

Eventually, all the legacy app’s functions are replaced by the equivalent fully tested microservices. At that point the Strangler Fig Pattern has done its job—the old app has been entirely displaced and can be retired.

Now, let’s look at a practical Strangler Fig Pattern example.

A Strangler Fig Pattern Case Study

Many large and well-known companies, such as Netflix, Google, IBM, and Microsoft, use the Strangler Fig Pattern in their application modernization efforts. A global leader in software security, with more than half a million customers around the world and $2 billion in revenue, is also on that list.

One of their most business-critical software systems was in desperate need of upgrading. This system, which had a combined 2 million lines of code and 10,000 highly interdependent Java classes, was originally implemented on-premises. 

Parts of it were successfully migrated to the Amazon Web Services (AWS) cloud using a lift-and-shift process. This provided some improvements in compute resource usage. But because the codebase was still monolithic in its architecture, with deep interdependencies across multiple modules, the system experienced significant challenges in terms of performance, scaling, development velocity, and deployment speed.

Because their key security suite was still overwhelmingly monolithic even after it was rehosted to AWS, it was increasingly causing integration and upgrade problems, leading them to mount a major modernization effort.

The company’s modernization team elected to work with an external partner to implement a Strangler Fig approach using an advanced, state-of-the-art, AI-based modernization platform. They started by using the modernization platform to conduct static and dynamic analyses to identify complex circular or unnecessary dependencies in the monolithic code and determine appropriate service domain boundaries. They then employed iterative refactoring, using the Strangler Fig Pattern, to eliminate those dependencies and create relevant microservices.

Related: Simplify Refactoring Monoliths to Microservices with AWS and vFunction

The process of refactoring the monolith to create microservices, which would have taken more than a year if done manually, was completed in less than three months. And the time to deploy an update to AWS was reduced from nearly an entire day to one hour.

Unleashing the Potential of the Strangler Fig Pattern

The benefits that can be gained in our Strangler Fig pattern example are available to any company that’s faced with the imperative of fixing technical debt and updating their legacy apps to keep pace with the requirements of today’s ever-changing marketplace and technological environments. Although modernizing a suite of monolithic apps is a highly complex and challenging undertaking, companies can make the process far less daunting by doing three things:

  1. Use architectural observability and the Strangler Fig Pattern to iteratively refactor your monolithic code into microservices.
  2. Work with a modernization partner organization that has deep experience and expertise in transforming monolithic Java apps into a microservices implementation.
  3. Rather than performing modernization tasks manually, make use of the advanced, AI-based tools that are now available.

If you’d like to explore how implementing the Strangler Fig Pattern can boost your company’s app modernization efforts, a good place to start is where our case study customer started. After making little progress on their own for over a year, they teamed with vFunction for expert guidance and assistance in the modernization process. 

They used vFunction’s state-of-the-art, AI-based continuous modernization platform to deploy architectural observability, substantially automating essential tasks, such as performing static and dynamic analyses to identify domains and dependencies in monolithic code, determining appropriate service domain boundaries, and refactoring monolithic code functions to microservices.

To get started with unleashing the power of the Strangler Fig Pattern in your company’s technical debt and modernization efforts, contact us today to request a Strangler Fig Pattern demo.

Technical Debt – Who’s Responsible?

If, as McKinsey declares, every company is a software company, then it’s equally true that at some level, every company has a technical debt problem. As McKinsey also says, “Almost every business has some degree of tech debt” and “Poor management of tech debt hamstrings companies’ ability to compete.” With 86% of IT executives reporting that their companies were impacted by technical debt over the last year, it’s an issue that can significantly affect any business that depends on software for its internal operations or customer interactions.

And yet, although 94% of companies recognize the importance of managing their technical debt, 58% have no formal strategy for doing so. Why such neglect? With many areas of the business competing for support, the ROI of modernizing legacy apps to eliminate technical debt simply hasn’t been clear enough to make it a priority.

But that reality represents an opportunity for software architects and development teams, which have traditionally assumed a somewhat hands-off and reactive stance toward business matters, to take on a bigger role in their organization. They can do so by making a compelling business case for why managing technical debt is critical for helping the company meet its strategic objectives. In this article, we want to help make that case. Let’s start by looking at why technical debt is such an important issue.

What Is Technical Debt?

The Journal of Systems and Software defines technical debt as “sub-optimal design or implementation solutions that yield a benefit in the short term but make changes more costly or even impossible in the medium to long term.” Ward Cunningham, who coined the term in 1992 to highlight the long-term costs of taking design or implementation shortcuts to release software more quickly, describes those costs this way:

“Every minute spent on not-quite-right code counts as interest on that debt. Entire engineering organizations can be brought to a stand-still under the debt load.”

The impact of Cunningham’s insight can be seen in the fact that engineers now spend about a third of their time fixing technical debt issues, siphoning off 10%-20% of their company’s new product technology budget in the process.

Who’s Responsible for Technical Debt?

In general, there’s no single source of technical debt. It often results from the need to release software as quickly as possible. Sometimes it reflects a misalignment between business requirements and development specifications or practices. Or it may be caused by the fact that once launched into the real world, apps frequently require quick, ad hoc changes that may not align with the original architectural design.

Related: Eliminating Technical Debt: Where to Start?

But the fact that technical debt usually cannot be traced to any definite source can be a distinct advantage. It allows software architects and developers to advocate for prioritizing application modernization to minimize technical debt without provoking resistance from other stakeholders who might feel that such an emphasis points a finger of blame in their direction.

Types of Technical Debt

From an app design standpoint, there are three major types of technical debt:

  1. Code-level technical debt: This type of debt arises from shortcuts or errors inserted into the code as it is being developed or updated. It can severely limit the readability and maintainability of the codebase.
  2. Component-level technical debt: Components are logically modular units of code that should ideally be self-contained. But legacy app components are frequently tightly coupled and interdependent. That, along with any design, performance, or scalability issues, can create a significant amount of technical debt.
  3. Architectural-level technical debt: This refers to technical debt that is built into an app before coding even starts due to shortcomings in its architectural design. A good example is the monolithic architecture that typically characterizes legacy Java apps. A monolithic codebase is organized as a single unit that has functional implementations and dependencies interwoven throughout. Because any change might ripple through the codebase in unexpected ways, potentially causing the app to fail, monolithic apps can be extremely difficult to maintain and update.
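The component-level coupling described above can be made measurable. Here is a minimal, hypothetical sketch (the component names are invented) of detecting circular dependencies in a component dependency graph with a depth-first search — the same kind of analysis a modernization platform automates at far larger scale.

```python
# Illustrative sketch: detecting circular dependencies between components.
# A dependency cycle (A -> B -> A) is one measurable sign of tight coupling.

def has_cycle(deps: dict[str, list[str]]) -> bool:
    """Return True if the component dependency graph contains a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / in progress / finished
    color = {node: WHITE for node in deps}

    def visit(node: str) -> bool:
        color[node] = GRAY
        for dep in deps.get(node, []):
            if color.get(dep, WHITE) == GRAY:  # back edge: cycle found
                return True
            if color.get(dep, WHITE) == WHITE and dep in deps and visit(dep):
                return True
        color[node] = BLACK
        return False

    return any(visit(n) for n in deps if color[n] == WHITE)
```

For example, `has_cycle({"orders": ["billing"], "billing": ["orders"]})` reports a cycle, while the same graph with the `billing -> orders` edge removed does not.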

Gartner describes the relationship between the three types of debt this way:

“The code-and-component-level technical debt is usually the easiest type of debt to measure and pay down. At the same time, the architectural-level debt has a much higher impact on overall product quality, feature delivery lead time and other metrics.”

Benefits of Effective Technical Debt Management

Companies benefit from effectively managing their technical debt in two ways: by avoiding the damage caused by technical debt disasters, and by improving their ability to innovate. Let’s take a closer look.

Avoiding Technical Debt Disasters

During the holiday season of 2022, Southwest Airlines was forced to cancel almost 17,000 flights due to the failure of its outdated flight and crew scheduling system. This outage, caused by what devops.com calls the airline’s “shameful technical debt,” has so far cost the company more than $1 billion. And Southwest isn’t alone. According to the Consortium for Information and Software Quality, poor software quality is now costing U.S. companies more than $2.41 trillion.

Improving Innovation

Technical debt is the #1 obstacle to creating the new technologies and products that are critical for outpacing the competition in today’s rapidly evolving marketplace. Gartner estimates that by 2025 companies will spend 40% of their IT budgets on maintaining technical debt rather than on innovation. On the other hand, a report by McKinsey declares that companies that actively manage their technical debt can free up their engineers to spend up to 50% more of their time on innovations that support the organization’s business goals.

A Process for Addressing Technical Debt

Technical debt is not just an IT issue. Rather, it’s a critical concern that affects the entire business. That fact presents software architects and developers with a unique opportunity to take on a more strategic role, first by helping decision-makers understand both the risks to the organization of failing to address technical debt and the ROI of proactively doing so, and then by providing a sustainable solution.

Building a compelling case for dealing with technical debt requires a data-driven approach that highlights its impact on important business metrics. Here’s a three-step process for doing that.

1. Measure and Track Technical Debt: Architectural Observability

As management guru Peter Drucker once famously said, “You can’t improve what you don’t measure.” That’s why the first step in the process is to begin using architectural observability for continuously measuring and tracking technical debt as a key business metric.

Related: How to Measure Technical Debt for Effective App Modernization Planning

In a 2012 paper entitled “In Search of a Metric for Managing Architectural Technical Debt,” researchers described a methodology for measuring technical debt based on dependencies between architectural elements in the code. Their approach, which has become the basis for the practical use of machine learning to measure technical debt, enabled the development of an overall technical debt score based on three key metrics:

  1. Complexity — the amount of effort required to add new features to the app.
  2. Risk — the probability that adding new features may disrupt the operation of existing ones.
  3. Overall Debt — how much additional work will be required when adding new features to the app.

These metrics allow you to quantify both the risks of failing to address technical debt and the expected costs of doing so.
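As a purely hypothetical illustration of how such metrics might be combined into one number (the cited paper does not prescribe this formula; the weights and the 0–100 scale are invented here), a single technical debt score could be computed as a weighted average:

```python
# Hypothetical illustration: rolling complexity, risk, and overall debt
# into a single technical debt score. The weights and the 0-100 scale
# are invented for this sketch, not taken from the cited research.

def debt_score(complexity: float, risk: float, debt: float,
               weights: tuple[float, float, float] = (0.3, 0.4, 0.3)) -> float:
    """Weighted average of three 0-100 metrics, returned on a 0-100 scale."""
    for metric in (complexity, risk, debt):
        if not 0 <= metric <= 100:
            raise ValueError("each metric must be in the range 0-100")
    wc, wr, wd = weights
    return round(wc * complexity + wr * risk + wd * debt, 1)
```

A score like `debt_score(70, 85, 60)` then gives a single number that can be tracked release over release, which is the point of treating technical debt as a business metric.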

2. Identify and Rank Apps Impacted by Technical Debt

Most companies don’t need to address technical debt in all their apps at once. Instead, it’s best to assess technical debt across the company’s legacy app estate to identify which should be modernized and in what order. The use of an automated machine learning platform for this task is crucial since any effort to manually generate accurate technical debt metrics for perhaps thousands of legacy apps is simply not practical for most organizations.

3. Build a Business Plan for Addressing Technical Debt

For enterprise architects, the biggest obstacle to effectively managing technical debt is gathering the data needed to plan for, and justify, the required budget and resources. Business leaders often don’t have the background to fully appreciate the technical issues associated with technical debt. But they usually are very concerned about the organization’s ability to meet its strategic business goals. That’s why it’s crucial that enterprise architects make a solid business case for prioritizing an effective, ongoing technical debt management program.

McKinsey’s technical debt report highlights what the focus of that business case should be:

“Cutting back tech debt is the key to becoming tech-forward: a company where technology is an engine for continual growth and productivity.”

Unaddressed technical debt severely hinders a company’s ability to innovate and outpace competitors in its marketplace. According to a recent report on the business costs of technical debt, developers are wasting between 23% and 42% of their time because of technical debt. That contrasts with McKinsey’s declaration that companies that handle their technical debt can give their engineers 50% more time to spend on solutions that help the organization achieve its strategic goals. Gartner adds that companies that effectively manage their technical debt can deliver services and solutions at least 50% faster.

Building a Data-Driven Enterprise Tech Debt Plan

Most business leaders look for hard data to drive their decisions. That’s why it’s critical that you base your technical debt business case on objective metrics. How can those metrics be produced?

The quickest and most accurate means of generating that data is by using an advanced machine-learning assessment platform such as the one provided by vFunction. The vFunction Assessment Hub is specifically designed to deliver relevant and accurate technical debt metrics. Not only does it measure the complexity and risk level of your current legacy app portfolio, but it also quantifies the benefits to be gained by refactoring the apps with the greatest technical debt burden into a cloud-native microservices architecture.

If you’d like to see first-hand how vFunction can help you modernize your legacy apps and eliminate technical debt, please schedule a demo.

App Modernization Strategies for Cost Reduction and Optimization

For a growing number of companies today, app modernization is a high priority. They’re attempting to update their IT infrastructure and reduce costs by moving software applications out of their on-premises data centers and into the cloud. According to CloudZero, more than two-thirds already have some or all of their IT estate in the cloud, with 39% running at least half of their workloads there.

But for many, this effort hasn’t worked out as they hoped. According to a report from Fortinet entitled “The Bi-Directional Cloud Highway,” 74% of companies have migrated apps to the cloud but then moved them back again. Some of those return trips were planned, but many constituted an implicit admission that the initial transfer to the cloud failed to produce the expected results.

So, what went wrong?

In many cases, companies were disappointed because they didn’t obtain the financial savings they anticipated. CloudZero notes that six out of ten survey respondents report that their cloud costs are higher than expected, while 53% say they have yet to see any substantial ROI from their cloud investment. Respondents in another survey estimate that their organizations have wasted 32% of the funds they’ve spent on the cloud.

But it doesn’t have to be that way. Companies that develop and execute a well-targeted strategic plan for their cloud efforts can reap significant savings by modernizing their legacy apps to give them cloud-native capabilities.

In this article, we want to identify some of the most significant features of such a strategy.

How App Modernization Helps Reduce Costs

In the typical company today, engineers spend about 33% of their time dealing with technical debt. That term refers to the amount of unplanned work an IT organization must devote to supporting apps that, due to their outdated design or implementation, have become extremely difficult to maintain or adapt to meet new requirements. Continually investing scarce resources into keeping such apps running is a common but costly practice.

Related: How Much Does it Cost to Maintain Legacy Software Systems?

On the other hand, modernizing legacy applications to give them cloud-native capabilities can produce significant savings in and of itself. Intel quotes a recent study as declaring that when companies reduce their technical debt load by modernizing their legacy app portfolio, they realize immediate savings that amount, on average, to 32% of their IT budget. And according to IBM, companies that implement an effective app modernization program can expect benefits such as:

  • 15% – 35% year-over-year infrastructure savings
  • 30% – 50% lower app maintenance and operational costs
  • 74% lower costs for hardware, software, and staff
  • 14% increase in annual revenue

Getting App Modernization Right

Many companies fail to reap the expected benefits from their app modernization efforts because they confuse modernization with simple migration. They assume that by simply migrating their legacy apps from a data center environment to the cloud, without making any substantial changes to the apps’ design or implementation, they are achieving a significant degree of modernization. That’s not the case.

Legacy apps typically are monolithic in structure, meaning that the codebase is a single unit that has functional implementations and dependencies interwoven throughout. The very design of such apps creates a high level of technical debt because a change to any function can have unexpected effects elsewhere in the codebase, potentially causing the app to fail. Because of that inherent technical debt, monolithic apps are by nature very difficult to maintain and update.

When a monolithic app is transferred as-is to the cloud (a process called “lift and shift”) it carries its technical debt with it: all the factors that made the app difficult to maintain and adapt in the data center continue to do so in the cloud. And although it still needs the same CPU, memory, and storage resources it did in the data center, a monolithic app cannot efficiently access those resources in the cloud. All this can have a huge negative impact on cloud costs. In an article on the lift and shift methodology, IBM puts it this way:

“An application that’s only partially optimized for the cloud environment may never realize the potential savings of cloud and may actually cost more to run on the cloud in the long run.”

In reality, a monolith is the most expensive type of app to run in the cloud. It’s only when monolithic apps are truly modernized, by refactoring them to a cloud-native microservices architecture, that the full benefits of the cloud are obtained.

Reducing Cloud Costs

Once legacy apps have been refactored to have cloud-native capabilities, further steps can be taken to reduce cloud costs even more. Cloud cost efficiency is built on the fact that cloud-native services are inherently elastic, scalable, and adaptive. You can minimize your costs by fine-tuning your cloud operation to take full advantage of these characteristics. Here are several areas to focus on:

Reduce Operational Costs

Because of the cloud’s superior elasticity, cloud-native resources, including newly modernized legacy apps, can scale instantly and automatically based on demand. That allows you to rightsize your compute resources to fit your utilization requirements. Here are some ways to do that:

  1. Attribute your costs. To make sound decisions regarding forecasts, budgets, and cost optimization, you must understand how your IT costs are allocated across your organization. Identifying which functional areas contribute most to your overall cost structure allows you to sharply focus your cost reduction efforts.
  2. Inventory your compute resource needs. This enables you to determine the instance size and type you need for each workload based on historic usage patterns, and select the lowest-cost options that meet your requirements.
  3. Monitor your cloud resource utilization. Avoid over-provisioning by continuously monitoring your cloud resource usage patterns to identify utilization trends (your cloud provider probably offers tools for this). This will help you to rightsize your resource commitments based on your actual cloud workloads.

Related: Application Modernization – 3 Common Pitfalls to Avoid

  4. Implement autoscaling. Incorporate mechanisms into your workloads to automatically adjust the number of instances or resources you use based on current demand.
  5. Consider serverless computing. Serverless computing platforms, such as AWS Lambda, Google Cloud Functions, or Azure Functions, relieve you of the necessity of provisioning and managing virtual servers and allow you to pay only for the execution time you consume.
  6. Use cloud-native services. The cloud offers many managed services that often can outperform services implemented in your data center, and do so at a lower cost. For example, using a cloud-native database service such as AWS DynamoDB or Azure Cosmos DB is often far more cost-effective than migrating your on-prem DB solution to run in the cloud.
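The autoscaling step above can be sketched with the proportional scaling rule that common autoscalers use (Kubernetes’ Horizontal Pod Autoscaler applies essentially this formula); the replica bounds below are arbitrary example values:

```python
import math

# Sketch of a proportional autoscaling rule:
#   desired = ceil(current_replicas * current_utilization / target_utilization)
# clamped to configured minimum and maximum replica counts.

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int = 1, max_replicas: int = 10) -> int:
    """Scale the replica count toward the target utilization, clamped to bounds."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, desired))
```

For example, 4 instances running at 90% utilization against a 60% target scale out to 6, while the same 4 instances at 30% utilization scale in to 2 — capacity tracks demand, which is where the cost savings come from.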

Reduce Licensing Costs

Licensing costs are an often overlooked but potentially huge element of your overall cloud expenses. AWS makes the point very clearly:

“Without optimizing your licensing in cloud migration, the cost of overprovisioning third-party licensing can exceed the cost of compute.”

And in its lift and shift article IBM adds that your existing data center software licenses may not be valid for the cloud:

“Licensing costs and restrictions may make lift and shift migration prohibitively expensive or even legally impossible.”

Here are some steps you can take to reduce your licensing costs:

  1. Inventory your current licenses to determine if any are underutilized, no longer needed, or redundant. Check with vendors and cloud providers to see if it’s possible to transfer your existing licenses to your cloud environment.
  2. Proactively negotiate cloud licensing agreements with vendors based on current usage patterns and your assessment of future needs in the cloud.
  3. Consider alternatives such as open-source software or subscription-based services that offer a pay-as-you-go model for cost-effective scaling. Explore whether you can use managed cloud-native services that have licensing costs already built in.

Reduce Project Costs

App modernization allows you to:

  1. Increase the efficiency of the staff required to maintain and upgrade your legacy apps and reduce their technical debt. Once apps have been restructured into a microservices architecture, they’re far easier to understand and adapt. That means fewer people are required to manage them than were needed before modernization.
  2. Shorten release cycles by adopting a continuous integration/continuous deployment (CI/CD) methodology. By breaking monolithic apps into independent microservices, each assigned to its own team, you allow much of your development and maintenance work to be done in parallel.
  3. Drive increased agility and innovation by leveraging existing cloud-native resources (rather than building from scratch) to minimize the work your developers must perform to create the new apps and features that can propel your company forward in its marketplace.

Setting the Stage for App Modernization

Application modernization can provide substantial cost reductions for your organization’s IT operation. But it’s important to note that significant savings can only be achieved by making extensive use of automation in the modernization process.

The process of analyzing a company’s portfolio of monolithic legacy apps (which may have tens of millions of lines of code and thousands of classes) to untangle hidden dependencies and reveal service boundaries, and then refactoring those apps into microservices, is a highly complex and labor-intensive endeavor. Any attempt to accomplish it using manual methods would be prohibitively expensive in terms of time, personnel, and financial resources.

What’s needed instead is an AI-based automated analysis tool that can produce comprehensive static and dynamic analyses of your legacy apps far more quickly than human engineers could. That kind of accurate, detailed information about the current state of your legacy app estate can then serve as the basis for building an effective modernization plan.

The vFunction application modernization platform can quickly and automatically analyze your apps to assess dependencies, technical debt, service boundaries, and other important modernization parameters. It can substantially automate the process of refactoring a monolithic codebase into microservices, providing your team with significant savings in time, personnel, and money.

To experience first-hand how vFunction can streamline your application modernization efforts and help you to substantially reduce your IT costs, request a demo today.

Application Modernization Trends: Goals, Challenges, and Insights

Application modernization continues to gain traction. According to Foundry’s State of the CIO Study 2023, modernizing applications and infrastructure remains the third-highest initiative for Chief Information Officers (CIOs), and it is among the top five factors driving IT investment dollars in 2023. In fact, 91% of CIOs expect their budgets to increase or remain the same, with much of that funding needed to address application modernization trends.

Although organizations have made progress in modernizing legacy systems, they still have work to do if they want to achieve the following top five business initiatives:

  • Improve operational efficiency
  • Increase cybersecurity defenses
  • Transform business processes
  • Enhance the customer experience
  • Increase profitability

The ongoing focus on modernization indicates that Kubernetes (K8s) and cloud platforms alone have not solved the problems of large legacy monoliths that cannot be easily lifted and shifted. In these cases, application modernization will require refactoring or rearchitecting.

Modernization is at the core of 2023’s number two priority—cybersecurity. Legacy systems present a significant risk. Not only are they unable to defend against modern attack vectors, but they contain old vulnerabilities that were never fixed. Cybercriminals actively scan potential targets for legacy systems that have unpatched vulnerabilities.

At the same time, outdated systems and monolithic architecture hinder business operations and user experience. Older technologies do not play well with advanced solutions. Transforming operations for improved efficiencies is the top priority for 45% of CIOs in 2023. In the current economic environment, more efficient processes are important for lowering expenses and protecting profitability.

Cloud migration plays a significant role in application modernization. While the cloud is not a prerequisite for modernization, many companies have made modernization part of their cloud strategy. Exactly how the two combine depends on the organization.

Application Modernization Trends and the Legacy Dilemma

For most businesses, existing applications are still vital to business processes: they often support core functionality and host essential data. Organizations keep using legacy systems precisely because they are crucial to operations, and dismantling them to build new ones would destabilize or disrupt business processes.

Related: What is Application Modernization? The Ultimate Guide

The technologies, infrastructure, and architecture of monolithic applications are more rigid than newer microservices architectures, limiting IT teams’ ability to develop new features quickly and efficiently. Some legacy systems are already obsolete, making them challenging or impossible to replace. In such cases, the only alternative is modernizing the applications.

How Companies View the Legacy Dilemma

In many ways, companies view legacy systems as “the devil they know.” These systems are usually an integral part of business operations, and the magnitude of changing out a core system can seem unfathomable. As long as the system functions, companies are reluctant to risk disruption.

For many organizations, the solution resides in the cloud. If lifting and shifting monolithic applications to the cloud adds to the life of a legacy system, many companies are willing to integrate old code into cloud-based platforms. However, the strategy is not without challenges.

Addressing Lift and Shift Challenges

Old and new technologies do not merge seamlessly. Integration often requires APIs or middleware to allow the systems to coexist, and once operational, the systems may still fall short on performance. These are just a few of the challenges of rehosting a legacy application in the cloud.

Incompatibility

It may be possible to lift and shift applications to the cloud, but some apps are not compatible. Identifying these specific apps helps determine how to handle them before the move. Rehosting applications in the cloud can also lead to performance and latency issues. Applications that depend on third-party software are also often unsuitable for the lift and shift method.

Inefficiencies

While rehosting may move a legacy application to the cloud faster, it may take longer to optimize the older technology. Some apps may also be unable to leverage cloud computing resources. Since legacy applications are not cloud-native, it may be challenging to run them efficiently. Other application modernization methods, such as refactoring or rearchitecting, can deliver a more cloud-native application.

Cost

Moving a legacy application to the cloud with minimal changes may appear to be the least expensive and lowest-risk option. However, the long-term costs can be substantial. Without a cloud-native environment, organizations may struggle to deliver competitive products, resulting in lower market share and fewer customers.

Even though the legacy application is operating in the cloud, it cannot take advantage of all cloud capabilities. Critical visibility may not be available, making it more difficult for IT to troubleshoot the application or defend against cyberattacks. When deciding how to best modernize applications, businesses need to evaluate both long- and short-term factors.

Security Issues

Cloud security depends on the individuals implementing it, and on-premises security best practices do not translate directly to the cloud. Organizations deploying their first cloud application often lack the expertise to secure a cloud environment, and finding the talent to fill that gap is a challenge.

Staffing shortages in the tech field continue. The US Bureau of Labor Statistics predicts that the need for cybersecurity personnel will increase by 35% between 2021 and 2031, and that job openings for software developers will increase by 25% over the same ten years. Finding and retaining the talent needed to ensure a secure cloud environment is a formidable task.

Shifting Priorities 

A recent survey on the future of the cloud found that organizations that view moving to the cloud as a strategic part of their digital transformation achieved higher levels of innovation than their less strategic counterparts. The survey highlighted the value of maximizing cloud services. For example, those companies with cloud services that support advanced technologies such as artificial intelligence are 1.7 times more likely to receive increased value than businesses with a less mature infrastructure.

However, cloud-based transformation requires modernization. According to IBM, modernization amplifies the value of the cloud as much as 13 times if it is part of an end-to-end transformation. Even though 83% of executives agree that modernizing applications and data is critical to their business strategies, only 27% have modernized their workflows. 

As priorities shift, organizations are re-evaluating their modernization strategies. Aligning business, modernization, and cloud strategies enables companies to optimize their cloud services and capitalize on application modernization trends.

Creating a Cloud Strategy for Application Modernization 

Every business strategy should include a cloud strategy. Companies adopting a “cloud-first” policy need a plan for onboarding new and modernizing old workloads. As they look to develop strategies, businesses should consider implementing policies such as the following:

Modernizing Data

Gartner analysts predict that by 2025, at least 85% of companies will adopt the cloud-first principle. However, it won’t be easy to implement their digital strategies without cloud-native technologies. This rings true since the majority of enterprise workloads are not yet cloud-ready.

Related: Q&A Series: The 3 Layers of an Application: Which Layer Should I Modernize First?

So how do workloads become cloud-ready? Modernizing data means replacing legacy databases with ones that can handle distributed and streaming data sources and sinks. To modernize the data layer, modernization experts recommend starting with the business logic layer.

Migrating to a New Architecture

Another application modernization trend is embracing new architectures. Instead of shifting a legacy application to the cloud in its entirety, you can move some of its features to more efficient architectures. This enables faster development.

When modernizing any application architecture, leveraging architectural observability tooling is essential. It pinpoints architectural hotspots and drift issues, which can then be addressed incrementally as you move to the new architecture. It also surfaces security, scalability, and reliability concerns and helps resolve issues with tolerance, capacity, and redundancy.

Turning Monoliths into Microservices

Monolithic applications have a single large codebase. In contrast, a microservices application is composed of small services that operate independently, each handling a single business capability. This transformation improves the development and deployment of updates and new features, makes technology stacks more flexible, and minimizes the risk of downstream effects that come with changes to the underlying code.

Moving to the Cloud

The cloud revolutionized digital experiences with innovations such as mobile payments, and most legacy applications need cloud modernization to keep pace. Cloud-native platforms allow developers to leverage the principles and tools of the cloud environment, making it possible to deploy new digital workloads to those platforms.

Going Hybrid

In some cases, fully modernizing for the cloud is unnecessary. Depending on business goals and budgets, organizations can combine public, private, and hybrid clouds. For instance, if an application experiences usage spikes, a public cloud can scale appropriately to absorb them at lower cost. If there’s little or no financial gain from a complete migration, a hybrid cloud is another option.

Incorporating Trends

Unless modernization is part of a cloud strategy, organizations will fail to realize the cloud’s full value. Simply shifting legacy code to the cloud doesn’t provide the agility or resilience required in today’s competitive environment. Without application modernization, companies cannot address the 2023 trends impacting digital transformation.

How 2023 Trends Impact Application Modernization

Not all trends are positive. Ongoing labor shortages and cost-based decisions will hamper modernization efforts. Disruptive technologies will add pressure for cloud-native capabilities, and a lack of cultural change will allow technical debt to accumulate. These are just a few of the trends companies must address as they look to the future.

Finding Tech Talent

IBM’s study found that 45% of companies consider a lack of expertise an obstacle to modernization. With fewer than 10% of employees having cloud or modernization experience, organizations need to look beyond new hires to acquire that expertise. Executives say financial constraints are the primary reason they lack experienced employees.

  1. Recruiting talent is expensive. Despite recent staff reductions in the tech sector, finding people to fill open positions can still take four to six months. That assumes CIOs can find them. Gartner found that 86% of companies have encountered more competition for candidates in 2023. Stiff competition means higher wages at a time when money is tight, and inflation paints an uncertain economic outlook. 
  2. Retaining staff is critical. Gartner’s survey found 73% of CIOs worry about staff attrition. As demand continues to outpace supply, headhunters are looking to entice employees to change employers. Companies need to invest in their technical staff if they want to retain them.

Providing growth opportunities not only improves a business’s technology capabilities but also increases employee retention. Unfortunately, 43% of organizations cite budget constraints as the reason they fail to offer skills development. Another 38% say they are too busy to lose time to training, and 32% would rather hire new talent. 

Related: Why Organizations Are Adding App Modernization to CCOE

Deciding whether to recruit or retain depends on an organization’s skills gap. Rather than default to a set strategy, CIOs need to determine which capabilities can be developed in-house with a little upskilling and which expertise needs to be hired. CIOs should also consider modernization tools that can reduce the time individuals spend on low-value tasks.

Understanding Disruptive Technologies 

Knowing how disruptive technologies will impact business growth begins with modernization. New technologies such as artificial intelligence (AI), the Internet of Things (IoT), and virtualization all require modern applications operating in a cloud-native environment. Legacy systems will be too far removed to fit comfortably with emerging technology.

Artificial Intelligence

Generative AI uses AI models to produce content, acquiring and synthesizing data to compose responses. ChatGPT, for example, is an AI-powered chatbot that understands natural language, retains context, and delivers the most probable response. While generative AI is in its infancy, imagine how personalized customer experiences could become. Online shoppers could finally receive answers to questions such as:

  • Will this chair go with the rest of the room?
  • Which appliance is the best choice for my needs?
  • What goes with this shirt?

Answers to these questions can quickly dispel barriers to online purchases. However, organizations will need a modern infrastructure to take advantage of generative AI.

Internet of Things (IoT)

From drones to sensors, more devices are being deployed every day. Each device collects data that, when totaled, results in millions, even billions, of data points. Processing massive amounts of information requires cloud-based resources. It demands modernized applications that can turn data into valuable insights. 

When an agricultural enterprise invests thousands in IoT devices, it needs applications that can take advantage of cloud computing capabilities. Deploying atmospheric sensors across acres of farmland helps farmers know when conditions are right for planting and harvesting. Having the right foundation ensures the results will be comprehensive and timely.

Controlling Technical Debt

Organizations continue to collect technical debt. According to McKinsey, they are stuck in a vicious cycle where IT struggles to keep up with requirements—expediency rules how solutions are implemented. The landscape grows more complex with each less-than-optimum deployment.

Most companies are aware that technical debt is killing modernization efforts. What they may not realize is that 40% of IT is technical debt. For every project, companies pay an additional 10% to 20% to address technical debt. Among CIOs, 30% believe at least 20% of their new product budget is consumed by technical debt.

McKinsey’s research found that reducing technical debt has far-reaching impacts. Engineers could spend as much as 50% more time working on value-oriented products. They would spend less time addressing system complexities. Uptime would improve, and resiliency would become a reality. To move forward, businesses need to control their technical debt.

Reducing technical debt isn’t just an IT problem. It’s a cultural problem where expectations focus on fast and low-cost solutions. No matter the intentions, if the culture is more concerned with immediate results than long-term viability, technical debt will continue to accumulate. Without an application modernization plan, accumulated debt will weaken an organization, making it impossible to remain competitive.

Future Proofing the Enterprise

McKinsey recommends that organizations make budget allocations to control technical debt a strategic decision. It’s not just earmarking funds for modernization; it’s managing those funds separately, creating an environment of accountability and transparency. Executives must incorporate modernization into their strategic plan and develop monitoring processes to hold everyone accountable.

For example, suppose the accounting department desperately needs a fix and hounds IT for delivery. IT can kludge something together, but the solution only adds to its technical debt. Alternatively, IT could deliver a quick fix and then follow up with a solution that eliminates the associated debt. However, delivering the follow-up solution means the sales department will need to wait another two weeks for their update.

Traditional approaches would have IT deliver the quick fix and complete the sales update on time. The accumulating debt would be IT’s problem to fix while juggling the myriad of high-priority projects. In many cases, the correction never happens.

Under McKinsey’s system, the decision would be strategic. It would mean balancing the short-term gain against future modernization. It would require executives to back the appropriate strategic decision regardless of the immediate impact. 

Looking Beyond Cost

Although the majority of executives understand the toll technical debt inflicts on their businesses, they still consider cost as the primary factor when looking at application modernization. To future-proof their organizations, executives need to evaluate the opportunity costs as part of the cost analysis. What future capabilities will be lost if modernization doesn’t happen?

Moving technical debt considerations to the boardroom changes how application modernization happens. If a strategic objective is to use generative AI to improve customer experience, modernizing becomes part of the critical path. Updating older technology is woven into the business strategy to ensure that the use of generative AI happens. 

Identifying IT’s skill gaps allows companies to assess where to place their human resource dollars. It also enables businesses to find automated solutions that can free staff from time-consuming, repetitive work. The more comprehensive the talent pool, the better an enterprise can navigate the future.

Navigating the Future

vFunction’s solution helps organizations future-proof their applications. Its platform helps turn Java or .NET monolithic structures into microservices. Using AI-powered technology, the product provides IT departments with the ability to control architectural drift in a continuous modernization environment. Request a demo or watch the video to learn more about future-proofing your enterprise.

How Continuous Modernization Can Address Architectural Drift

As more organizations implement a shift-left approach to software development, architects are looking for ways to become part of a collaborative team. They can no longer deliver a design to development and walk away. With a continuous modernization approach, friction between what was planned and what was implemented disappears as teams work together to address architectural changes as early in the process as possible. 

Originally, the shift-left movement focused on security. Its goal was to create systems where security was part of the design rather than added later in the development process. The shift required software architects to consider security measures in their initial design. It meant testing earlier and addressing design limitations while development was just beginning.

The changing mindset added pressure on engineers to maintain visibility into an application’s architecture. Evolving security requirements often demanded changes in design, and that created a problem: how do you change a design if you don’t know what the design is doing in production? Even more critical, how do you control design changes in a continuous integration/continuous delivery (CI/CD) environment? Can continuous modernization help?

What is Continuous Modernization?

Continuous modernization not only extends the CI/CD process; more importantly, it enables organizations to incrementally modernize software to minimize technical debt and architectural drift. It gives companies a path for improving security as architectural vulnerabilities appear. Unlike waterfall approaches, architecture updates are delivered throughout the SDLC rather than deferred to future releases, or never delivered at all.

However, all software suffers from growing technical debt. Changes are based on expediency rather than design integrity, and if left uncontrolled, an application can deviate from its original architecture, making it difficult to locate and fix flaws. Understanding architectural drift is imperative if teams are to leverage continuous modernization to minimize architectural erosion.

What is Architectural Drift?

Software evolves—sometimes by design, but often in response to business demands. Users want a new feature. The application needs better performance. Of course, delivery schedules are tight, requiring trade-offs. These decisions often result in technical debt and architectural drift.

Architectural drift results from the unchecked evolution of runtime software, leading to a lack of coherence and clarity in the software’s design. Dead code, class entanglements, and deep dependencies contribute to Foote and Yoder’s “big ball of mud” that prevents architects from observing how systems work in live environments.

Related: Getting Leadership Buy-in on a Continuous Application Modernization Strategy

Unless engineers can see the architecture in operation, they cannot determine how far the software has drifted from its original design. They’ve lost control of the ship, and it’s drifting in open waters.

How Does Architectural Drift Become a Problem?

When ships drift, they go where the ocean takes them. Left unchecked, they run aground or succumb to the elements. The same can be said of architectural drift. Without correction, a system founders: its agility falters, and its viability fails. Like a ship, it succumbs to its environment.

Start with the Design

Architectural drift can begin before a developer writes a line of code. Designs that use tightly coupled structures with layered dependencies allow developers to rely on the infrastructure to maintain control. Function calls disappear into a maze that mysteriously delivers a result — almost like magic. If an error occurs, architects have few resources to help identify where the problem resides.

Even with distributed architectures, engineers can struggle. Microservices deployed across an application throw an error. How do architects determine if the error is isolated to a single instance? How do they determine what triggers the error? Without observability, resolution becomes time-consuming.

Add Changes Over Time

Not every software change adds to an application’s architectural technical debt, but those that do pose a problem for engineers. During development, teams may try to follow best practices for identifying deviations from the original architecture specification. But shift happens. Whether requirements change or expediency calls, the result is architectural erosion: modifications alter the original design, and if left unchecked, these changes accumulate and increase the application’s architectural drift.

Mix in a Lack of Visibility

While visibility tools abound at the application level, the same tools are not available at the architecture level. Without tools to analyze, track, and correct architectural erosion, architects can’t adequately determine how far the design has drifted. And even with better tools, engineers need observability capabilities.

Unlike monitoring, observability takes a proactive look at the internal state of the software during runtime. Its goal is to identify critical anomalies in a system’s architecture. To be effective, observability must be consistent, holistic, and automated. But what exactly is observability?

What is Observability? 

Observability is the ability to infer the internal state of software from its external outputs. It typically draws on three data sources, known as the three pillars of observability:

  • Logs. Record what happens within an application, including its infrastructure.
  • Metrics. Defined data points used to flag unusual behavior.
  • Traces. Provide visibility of step-by-step code execution.

Events are often considered a fourth pillar. These customized records highlight potential problems through pattern identification.

While the data sources provide useful information, they have their limitations. Using observability tools that combine the information into comprehensive views delivers a realistic picture of system operations. Unfortunately, not every system component has the same level of visibility tools.
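To make the distinction between the pillars concrete, here is a minimal Python sketch showing how one request might surface in all three. The service, function names, and record formats are invented for illustration; a real system would use dedicated tooling such as OpenTelemetry rather than in-memory lists.

```python
import time
import uuid

# Illustrative stores for the three pillars (invented for this example).
LOGS = []     # logs: discrete records of what happened
METRICS = {}  # metrics: aggregated numeric data points
TRACES = []   # traces: step-by-step execution spans

def emit_log(level, message, **context):
    LOGS.append({"level": level, "message": message, **context})

def increment_metric(name, value=1):
    METRICS[name] = METRICS.get(name, 0) + value

def traced(span_name, trace_id, fn):
    start = time.monotonic()
    result = fn()
    TRACES.append({"trace_id": trace_id, "span": span_name,
                   "duration_s": time.monotonic() - start})
    return result

def process_order(order_id):
    """Hypothetical request handler touching all three pillars."""
    trace_id = str(uuid.uuid4())
    emit_log("INFO", "order received", order_id=order_id)  # log pillar
    traced("validate", trace_id, lambda: True)             # trace pillar
    traced("charge", trace_id, lambda: True)
    increment_metric("orders_processed_total")             # metric pillar
    return trace_id

process_order("A-100")
```

Each pillar answers a different question about the same request: the log records that it happened, the metric counts how often it happens, and the trace shows how long each step took.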

Why is Architectural Observability the Answer?

System architects have worked with “big balls of mud” for decades, struggling to untangle threads and assess problems through indirect means. The difficulty with architectural observability has been the lack of purpose-built tools, and that stems from several kinds of complexity.

Systems Are Complex

Monolithic structures have given way to distributed architectures that include microservices and containers. Sustained visibility across a distributed system often requires multiple tools that deliver data in varying formats. What’s missing is data consolidation that delivers a holistic view.

Data is Complex

Sorting through volumes of data recorded in real time is a challenge. Even with automated tools, data management can become time-consuming, and if the data is not persisted, it must be extracted promptly to build an accurate view over time. These factors complicate tool creation, and data consistency is crucial to identifying drift.

Related: Shift Left to Avoid Technical Debt Disasters: The Need for Continuous Modernization

A further complication to consistency is data separation. In collaborative environments, having access to all pertinent data may not be an issue; however, in situations where data silos exist, incomplete information makes a comprehensive evaluation impossible.

Business is Complex

Tying architectural events to business outcomes isn’t easy. Without an understanding of business complexities, architects may focus on the wrong metrics and fail to collect crucial data for analysis. For example, engineers may place a high priority on determining why CPU usage increases when a set of microservices runs, while executives consider increasing page load times more significant because slower load times can translate into lost revenue for an eCommerce site.

Observability allows engineers to see how released software deviates from its original design, but it requires the right tools and a plan to address architectural drift.

How to Address Architectural Drift

Observability needs tools to establish a baseline and set thresholds. Best practices call for proactively detecting and correcting the abnormal behaviors that lead to architectural drift. The outcome should be a process that is consistent, holistic, and automated.

#1: Establish a Baseline

Baselines establish a starting point. They should include service topologies that itemize common and core business services. They should identify critical components that are routinely audited to detect deviations from the baseline. Automating the process allows architects to track those ad-hoc changes that impact an application’s infrastructure.

#2: Identify Service Exclusivity

As part of baselining, measure service exclusivity. Knowing how many independent classes and service resources are in use highlights dependencies that increase architectural debt. This baselining can help identify possible debt early before it becomes a paralyzing problem.
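As a rough illustration of what a service-exclusivity measurement might look like, the sketch below computes, for a hypothetical service map, the share of a service's classes that no other service uses. Both the mapping and the metric are assumptions made for illustration, not vFunction's actual calculation.

```python
# Toy service-exclusivity metric: the fraction of a service's classes
# that are used by that service alone. Service and class names invented.

def exclusivity(service, usage):
    """usage: dict mapping service name -> set of class names it uses."""
    own = usage[service]
    shared = {cls for other, classes in usage.items()
              if other != service for cls in classes}
    exclusive = own - shared
    return len(exclusive) / len(own) if own else 1.0

usage = {
    "billing":  {"Invoice", "TaxCalc", "AuditLog"},
    "shipping": {"Parcel", "AuditLog"},
}

# "AuditLog" is shared with shipping, so only 2 of billing's 3 classes
# are exclusive to it.
print(exclusivity("billing", usage))
```

A low exclusivity score flags a service whose resources are entangled with others, exactly the kind of dependency the text warns will increase architectural debt.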

#3: Set Thresholds

Architects can establish thresholds for proactive observations of a system’s architecture. Automated systems enable engineers to schedule observations, configure measurements, and start analyses. Automating the collection of key metrics expedites the evaluation process for faster resolution of pending issues.

#4: Automate the Process

Automating data collection is only the first step in delivering comprehensive observability. Automation must turn that data into valuable insights that enable architects to minimize architectural erosion. The landscape is too complex and changes too rapidly for manual processing.
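Steps 1 through 4 above can be sketched as one small automated check: snapshot a few architectural metrics, compare them to the recorded baseline, and flag anything past a threshold. The metric names and the 20% threshold below are illustrative assumptions.

```python
# Hypothetical baseline of architectural metrics (step 1).
BASELINE = {"service_count": 12, "avg_dependency_depth": 3.0, "dead_classes": 40}
THRESHOLD = 0.20  # step 3: flag >20% relative change from baseline (assumed)

def detect_drift(snapshot, baseline=BASELINE, threshold=THRESHOLD):
    """Step 4: automated comparison of a new snapshot against the baseline."""
    alerts = []
    for metric, base in baseline.items():
        current = snapshot.get(metric, base)
        change = abs(current - base) / base
        if change > threshold:
            alerts.append((metric, round(change, 2)))
    return alerts

# A later snapshot shows dependency depth creeping up past the threshold.
snapshot = {"service_count": 13, "avg_dependency_depth": 4.5, "dead_classes": 41}
print(detect_drift(snapshot))  # -> [('avg_dependency_depth', 0.5)]
```

Run on a schedule, a check like this turns the baseline and thresholds into the consistent, automated process the section describes.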

Continuous Modernization and Architectural Drift

Architects must be proactive in a continuous modernization environment. They must shift left to be more engaged in the initial design, whether refactoring, rearchitecting, or starting new. Their job persists through an application’s lifecycle because they have the tools needed to observe and correct architectural drift.

vFunction’s Continuous Modernization Manager provides architects with the tools needed to overcome observability challenges. Its automated modernization solution provides a holistic approach that delivers insights based on consistent data. The manager allows architects to:

  • Shift left into the development cycle
  • Monitor, detect and identify architecture drift
  • Set baseline and thresholds
  • Send alerts when critical thresholds are crossed

vFunction enables engineers to remain proactive through an application’s lifecycle. It helps maintain the architectural integrity of the software as it is continuously modernized. To see how we can help with your application modernization needs, request a demo.

Q&A Series: Building a Business Case for Application Modernization

How to get buy-in and budget for successful application modernization

Bob Quillin, chief ecosystem officer at vFunction, is an industry expert when it comes to application modernization. He often finds that the biggest hurdle to application modernization is developing a compelling business case for taking on such a complicated task, one that can be costly and frequently fraught with risk. Business leaders need justification for budget allocation, yet most architects lack the data to prove modernization is essential or to determine the resources needed to pull it off successfully.

A business case must be backed with data: data that is easy to understand and that reveals the bigger picture. Business leaders don’t often want to be told something has to be done; they prefer to be shown why it needs to be done and what is likely to happen if it isn’t. This is precisely what Bob and his team at vFunction do with their Assessment Hub and Assessment Hub Express tools. These solutions were built specifically for architects who want a simplified way to build a data-driven application modernization plan and need to create a strong business case to do so.

In this interview with Bob, we discuss the key inhibitors to successful modernization projects and how to develop a rock-solid business case for application modernization. He will also discuss how the vFunction Assessment Hub works and the benefits it brings for gaining rapid visibility into the health of the entire application estate.

Q: Tell me why building a business case for application modernization is so difficult.

Bob: One of the key inhibitors to modernization projects being successful is that it’s hard to build a business case to get them approved and off the ground. Traditionally, architects haven’t had a clear understanding of what exactly needs to be done, how long it will take, or how complex it will be, all critical components of a business case. But now, we can provide the science and data to build the case.

Q: What happens without a business case?

Bob: Oftentimes, nothing. Modernization projects are either delayed, never start, or end in failure. If you aren’t looking inside and analyzing the application architecture, you can’t accurately predict the value of modernization. Without the business case, you can’t have a successful modernization project and vice versa. 

In our 2022 study with Wakefield Research of 250 technology professionals, we found that “Failure to Accurately Set Expectations” was the number one reason given by respondents who started modernization projects they didn’t complete. Areas of particular concern included unrealistic expectations relating to budget and schedule requirements and anticipated project results, such as improvements in engineering velocity and application innovation.

With vFunction’s suite of application modernization solutions, architects and senior engineers can understand the technical debt in each app, pull it out and fix the problem, modernize the app, and continually monitor and fix new issues to prevent technical debt from accumulating again.

Q: How do architects know they have a technical debt problem?

Bob: From a qualitative standpoint, application leaders have a strong sense that they are carrying a heavy load of technical debt from the symptoms they experience every time they add a new feature. How long does it take? If it’s taking your team more and more time each sprint, you know you have an issue. It can also become harder to add new features because it’s more difficult to figure out where to add a new feature and how to integrate it. Testing also becomes much more difficult and time-consuming: one small change in a monolith requires you to test the entire application because you don’t know the downstream implications.

With monoliths, there is a high degree of dependencies, so release cycles expand, engineering velocity decreases, and eventually, your ability to compete and add new features slows. You’ll often see a backlog of feature requests you have in your project management and tracking systems that you can’t keep up with. It significantly hampers the Dev team’s capacity and production. 

Q: How can all of this lead to increased costs?

Bob: If you have a spike in demand (requiring more CPU and memory resources) or it’s an important application, it becomes difficult to scale a monolithic application without buying bigger machines or larger cloud instance types or shapes. On the flip side, cloud-native architectures are more horizontally scalable, with greater elasticity. The two complaints I hear most from architects are that they can’t scale and that costs go up.

There are costs to run the app even after a lift and shift. If you break down that monolithic application into microservices, you can be more efficient in how you match the wide variety of cloud instance types to each particular need. Release velocity increases, testing cycles speed up, and you gain elasticity and scalability, all at a lower cost. These are all reasons to break down monolithic apps into microservices.

Q: If there is such a need and so much to gain, why is it so hard to get an application modernization project off the ground?

Bob: We surveyed 250 application teams and looked at the top reasons for failure. The number one reason was a failure to set expectations for leaders and architects accurately. At a minimum, they need to understand what application modernization will solve in terms of technical debt, how long it will take, and what it will cost. They need an ROI — what they will get in terms of reducing technical debt and increasing innovation.

Q: Why is this information so elusive?

Bob: Currently, the only information available is mostly qualitative. In other words, teams just use their experience and best guesses, bring in consultants or a system integrator, or outsource the whole thing. It isn’t based on any science, automation, or best practices. When they don’t have data to measure architectural technical debt, they can’t assess the complexity of the app or the risks of changing it. They need observability to understand dependencies, dead code, and which code is common and which isn’t: all the things that make up the architecture. Without that observability, it’s nearly impossible to plan how to rearchitect an architecture you don’t understand. A classic business mantra is that if you can’t measure it, you can’t improve it. When you can measure it, you can decide how to improve it, what to fix, how long it will take, how complex it will be, and what the cost will be.

Q: vFunction directly addresses these challenges with the vFunction Assessment Hub. Is there anything else out there like it?

Bob: There are other tools that analyze source code to report back how it is written, flag any number of code “smells” or poor software engineering practices, and compute cyclomatic complexity, which tracks the number of linearly independent paths through the application. Source code analysis is different from architectural analysis, which looks at how an app is built and constructed versus how it is written. It’s easier to track little source code errors along the way than to fix the architecture itself, but you never truly modernize the application if you don’t address the underlying root cause of technical debt.

I like to think of it like a house. When you’re updating a kitchen or adding a bathroom to a massive house, you have to figure out the architectural components before you can tie new plumbing into the old. All of the plumbing is interdependent, so if you make a mistake with one piece of plumbing, it can impact the entire plumbing system. The monolithic application is the house. Think how much easier it is if you had the opportunity to break up a mansion into individual casitas, or microservices in this analogy. Adding on or fixing plumbing issues is now much more manageable, with fewer dependencies to worry about.

Security is another issue. If you add a piece of open-source code that has a known vulnerability, you can scan that library or code prior to adopting that component. People call source code tracking “checking for code smells,” which means looking for the errors or anti-patterns that developers have added along the way that can be detected and fixed. Security analysis tools pick up security issues. Static analysis tools pick up code smells. At vFunction, we actually use these in our own software development process, but what’s typically missing for development teams are measurement and tracking tools for architectural technical debt.

Q: What is an example of an architectural issue?

Bob: Dead code is a good example of an architectural issue. You can’t analyze it just by looking at the source code. We define dead code as code that is reachable but no longer used. We have found over the years that there are large swaths of obsolete or “zombie” code hidden in most monoliths. Something could call it, but nothing does. Maybe the service is now obsolete. It’s just sitting out there, not being used.

Architectures drift over time, new features get added or maybe replaced, and older features are no longer used by customers and have been replaced. You’re carrying that technical debt forward. No one wants to touch it because maybe they weren’t there when it was written and don’t know what to do with it, or they fear if they touch it, there will be negative downstream effects.
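The "reachable but unused" idea can be illustrated with a toy comparison between a static call graph and runtime observations. The functions and call data below are invented for the example; real dead-code detection is considerably more involved.

```python
# Static view: caller -> callees. Something *could* call legacy_report,
# so it is reachable. (All names are hypothetical.)
CALL_GRAPH = {
    "main": ["checkout", "legacy_report"],
    "checkout": ["charge_card"],
    "legacy_report": [],
    "charge_card": [],
}

# Runtime view: functions actually observed executing in production.
RUNTIME_CALLS = {"main", "checkout", "charge_card"}

def reachable(graph, root="main"):
    """Collect every function statically reachable from the entry point."""
    seen, stack = set(), [root]
    while stack:
        fn = stack.pop()
        if fn not in seen:
            seen.add(fn)
            stack.extend(graph.get(fn, []))
    return seen

# "Zombie" code: statically reachable, never observed running.
zombies = reachable(CALL_GRAPH) - RUNTIME_CALLS
print(sorted(zombies))  # -> ['legacy_report']
```

This is why source code analysis alone misses it: statically, `legacy_report` looks alive; only runtime observation shows that nothing ever calls it.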

Q: How is innovation impacted by technical debt?

Bob: Modernization requires funding, people resources, and time, diverting resources from other priorities, so you will have to build a business case and get approval. The question is, do you want to keep doing what you’re doing and add more and more features and ignore technical debt, or finally reduce that debt and start investing the savings in innovating for the future? 

Over time, there is a tipping point where all that technical debt weighs down the application and the organization to the point of breaking. The calculus here is that every dollar spent on technical debt is a dollar you aren’t spending on innovation. If you want to innovate more, you have to reduce your technical debt to get the ROI from modernization. 

Q: How does vFunction help increase innovation?

Bob: vFunction will measure and help you manage architectural technical debt and then highlight the upside if you reduce it: how much ROI you’ll have in terms of innovation. This translates directly to dollars. In fact, this is one of the first factors we look at: how much architectural debt are you carrying, and how is it impacting your ability to innovate? Instead of theories and “gut feels,” we can give you numbers: here’s your ROI and TCO. Now, you have a business case that clearly illustrates to decision-makers that “if we want to increase business velocity, customer satisfaction, and innovation, this is how we have to apply our resources to bring down technical debt.”

Q: Tell me more about vFunction Assessment Hub and how it gathers and presents the data.

Bob: Our Assessment Hub analyzes technical debt based on two factors: complexity and risk. Then, those are synthesized into a technical debt score. 

Complexity is based on the degree of class entanglements within your application. It measures the density of the dependencies and how complex the application will be to modernize. 

Risk is based on the length of the dependency chains in the application. We measure the dependency chains and how they interrelate downstream, so you know the consequences down the line if you make one change here. This is the bane of the monolith: if you make one change, you have to test the whole thing. With microservices, you have a high degree of exclusivity over the resources you use, and they are constrained within the boundaries of the microservice. The risk of making a change is much lower.
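As a purely illustrative sketch (the formulas, weights, and class names here are assumptions, not vFunction's actual scoring), complexity might be modeled as the density of a class dependency graph and risk as the length of its longest dependency chain, blended into one score:

```python
# Hypothetical class dependency graph: class -> classes it depends on.
DEPENDENCIES = {
    "OrderService": ["Billing", "Inventory"],
    "Billing": ["Ledger"],
    "Inventory": [],
    "Ledger": [],
}

def complexity(deps):
    """Density of class entanglement: edges over possible edges."""
    n = len(deps)
    edges = sum(len(v) for v in deps.values())
    return edges / (n * (n - 1))

def longest_chain(deps, node, seen=()):
    """Length of the longest dependency chain starting at node."""
    if node in seen:  # guard against cycles
        return 0
    return 1 + max((longest_chain(deps, d, seen + (node,))
                    for d in deps.get(node, [])), default=0)

def risk(deps):
    """Longest chain relative to graph size: long chains = far-reaching changes."""
    return max(longest_chain(deps, c) for c in deps) / len(deps)

def debt_score(deps, w_complexity=0.5, w_risk=0.5):
    """Blend the two factors into a single score (weights assumed)."""
    return round(w_complexity * complexity(deps) + w_risk * risk(deps), 3)

print(debt_score(DEPENDENCIES))  # -> 0.5
```

The point of the sketch is the shape of the idea: dense entanglement raises the complexity term, long downstream chains raise the risk term, and the combined score makes applications comparable.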

Q: How does that technical debt score inform decisions?

Bob: We set the technical debt score per application, and you can compare it with other applications. We also show how much effort it will take to fix it in terms of time and people. Architects can use that as a way to say, “Here is our technical debt and what it’s costing us. If we reduce the tech debt, here’s the innovation that occurs.” 

The Assessment Hub scores the top 10 technical debt classes and presents a prioritized list of where to start. For example, if you fix only these 10 things, we can say what effect that will have — the ROI. This kind of insight helps people understand not only a top-level debt score, but the components of complexity and risk, and then where to start. We also analyze the architecture to identify aging platforms and frameworks you will want to update. 

Q: What comes next once you understand the scope and magnitude of the modernization project?

Bob: Ideally, you would then jump into the vFunction Modernization Hub to do it. We’ve given you the path, now go at it with an AI-enabled approach.

The vFunction Assessment Hub Express is designed to be fast. You download and run it yourself from our website. We use it as part of our own analysis to help customers get started on modernization. It gives them a snapshot of what modernization will take. They can then say, “This will be a complex project,” or, “Wow, this isn’t that hard; we can do 100-200 classes ourselves.” Sometimes they don’t need full-blown modernization, or they can simply lift and shift because they aren’t carrying much debt anyway. You have to make sure there are clear business reasons to modernize.

Q: So, modernization isn’t always necessary? How do you know?

Bob: For an application to warrant modernizing, it needs to be an application that is actively used and critical to the business. If there’s a large backlog of features to add or requests to fix it, you know there’s a strong demand to extend or improve the application that isn’t being met. But if there’s no business reason to extend, there’s no business IP or competitive value, or if it can be easily replaced by a modern SaaS alternative, refactoring or rearchitecting may not be the best path.

Modernization is complicated, so you have to make sure there is a viable business reason to modernize. Only the business can understand and prioritize if it’s something they want and need to do.

Q: We assume modernization is to cloud-enable a legacy application. Is this true?

Bob: Partly. We’re looking at more than just improving the architecture. One of the greatest motivators besides velocity, scalability, and elasticity is reducing costs and increasing efficiency. When people move to the cloud, they’re also looking to lower infrastructure spend. Cloud services can be less expensive if you architect your application to use them efficiently.

But it’s also about reducing licensing costs. Legacy licensing for databases and for Java itself is expensive. If you move an application like an enterprise monolith to the cloud, you’re still carrying significant licensing costs. Most customers want to reduce licensing costs across the board. So the cost of running an expensive monolithic application, plus the related licensing costs, is another common motivator to modernize.

Q: Have vFunction users reduced costs this way?

Bob: Yes. If you look at our Trend Micro study, they took a monolith they lifted and shifted, modernized to microservices, and reduced their cloud instance spend by 50%. 

Legacy applications that are lifted and shifted to the cloud require some of the most expensive services in the cloud. If you’ve just moved an older app to the cloud, it’s running with high CPU and memory requirements, plus the most expensive data layer services as well. A lift-and-shift application has not been optimized for the cloud, and on top of those high infrastructure costs, a lift and shift doesn’t reduce licensing costs either. Unless it’s cloud-native, you can’t take advantage of the efficiency of the cloud. Vertical scaling is very expensive; you get far more horizontal scalability and elasticity with microservices, and it is much more cost-effective.

Q: Last question. You mentioned the importance of presenting the data the Assessment Hub gives in a way that’s easy to understand. Can you explain how the Hub does that?

Bob: The Assessment Hub is graphical. You see a visualization of complexity, risk, debt (with a score), components of tech debt, and the number of aging frameworks. Then, you can analyze TCO, the benefits of fixing the identified debt, and the resulting increase in innovation. 

We present this in different ways on a dashboard. For example, there is a pie chart view of innovation versus technical debt. It graphically represents how much you’re spending on innovation versus technical debt, with percentages for added detail. You can ask, “What are the benefits if I fix this technical debt, and how much will my TCO improve?” You can also download this information as a shareable PDF.
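The innovation-versus-debt arithmetic behind a dashboard view like that can be sketched in a few lines. The budget figure, percentages, and formula below are assumptions for illustration, not vFunction's actual model.

```python
# Back-of-the-envelope version of the innovation-versus-debt split.
# Budget, percentages, and formula are invented assumptions.

def innovation_split(budget, debt_fraction):
    """Split an engineering budget into (debt spend, innovation spend)."""
    debt = budget * debt_fraction
    return debt, budget - debt

budget = 1_000_000                                  # assumed annual engineering spend
debt, innovation = innovation_split(budget, 0.40)   # assume 40% consumed by debt
print(f"debt: ${debt:,.0f}  innovation: ${innovation:,.0f}")

# If modernization cuts the debt share from 40% to 15%, the freed-up spend
# is the annual return to weigh against the one-time modernization cost.
_, new_innovation = innovation_split(budget, 0.15)
annual_gain = new_innovation - innovation
print(f"freed for innovation each year: ${annual_gain:,.0f}")  # $250,000
```

That freed-up annual spend is the kind of number that turns a gut feel into a business case a CFO can evaluate.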

If you’re not ready to modernize, you can let Assessment Hub run over time to monitor trends. Our latest feature is a multiple-application dashboard. It provides compelling observability across multiple apps at the same time to visualize technical debt for a large application estate so you can compare and prioritize. 

You can scope the project to know what you’re getting into and if it’s worth it. If an architect doesn’t have the data, they can’t have a viable, believable business plan. The goal is to get people thinking about this as early as possible.

Bob Quillin not only serves as Chief Ecosystem Officer at vFunction but works closely with customers helping enterprises accelerate their journey to the cloud faster, smarter, and at scale. His insights have helped dozens of companies successfully modernize their application architecture with a proven strategy and best practices. Learn more at vFunction.com. 


Q&A Series: The 3 Layers of an Application: Which Layer Should I Modernize First?

How to avoid mistakes when modernizing applications

As Chief Ecosystem Officer at vFunction, Bob Quillin is considered an expert on application modernization, specifically modernizing monolithic Java applications into microservices and building a business case to do so. In his role at vFunction, he is inevitably asked the question, “Where do I start?”

Modernizing can be a massive undertaking that consumes resources and takes years, if it’s ever done at all. Unfortunately, because of its scale, many organizations postpone the effort, only deciding to tackle it when there is a catastrophic system failure. Those who do dive into the deep waters of modernization frequently approach it from the wrong perspective and without the proper tools.

Where to start with modernizing applications boils down to which part of the application needs attention first. There are three layers to an application: The base layer is the database layer, the middle layer is the business logic layer, and the top layer is the UI layer. 

In this interview with Bob, we discuss the challenges facing software architects and how approaching modernization by tackling the wrong layers first inevitably leads to failure, either in the short term or the long term.

Q: What do you see as the most common challenge enterprises face when deciding to modernize?

Bob: Most organizations recognize they have legacy monolithic applications that they need to modernize, but it’s not as easy as simply lifting the application and shifting it to the cloud. Applications are complicated, and their components are interconnected. Architects don’t know where to start. You have to be able to observe the application itself, how the monolithic application is constructed, and what is the best way to modernize it. Unfortunately, there isn’t a blueprint with clear steps, so the architect is going in blind. They’re looking for help in any form – clear best practices, tooling, and advice. 

Q: With a 3-tier application, you’d think there are 3 ways to approach modernization, but you say this is where application teams often go wrong.

Bob: Many technology leaders want to do the easiest thing first, which is to modernize the user interface because it has the most visual impact on their boss or customers. If not the UI, they frequently go for the database layer where the data is stored, perhaps to reduce licensing costs or storage requirements. But the business logic layer is where business services reside and where the most competitive advantage and intellectual property are embedded. It isn’t the easiest layer to begin with, but by starting there, you make the rest of your modernization efforts much easier and more lasting.

Q: What’s the problem starting with the UI layer?

Bob: When you start with the UI, you actually haven’t addressed modernization at all. Modernization is designed to help you increase your engineering velocity, reduce costs, and optimize the application for the cloud. A new UI can have short-term visual benefits but does little to target the underlying problem – and when you do refactor the application, you’ll likely have to rewrite the UI again! Our recommendation is to start with the business logic layer — this is where you’ll find the services with specific business value to be extracted. This allows you to directly address the architectural technical debt that is dragging your business down.

Q: What’s the value of extracting these services from the monolith?

Bob: In the past, everything was thrown together in one large monolithic “ball of mud.” The modernization goal is to break that ball of mud apart into smaller, more manageable microservices in the business logic layer so that you can achieve the benefits of the cloud and then focus on micro front-ends and data stores associated with each service. By breaking down the monolith into microservices, you can modernize the pieces you need to, and at that point, upgrading the UI and database becomes much easier.

Q: Tell me more about the database layer and the pitfalls of starting there.

Bob: The database layer should only be decomposed once, as it often stores the crown jewels of the organization and should be handled carefully. It’s also a very expensive part of the monolith, mostly because of the licensing, so it often seems like a good place to start cutting costs. But decomposing the database is virtually impossible without understanding how the business logic uses it. What are the business logic domains that use the database? Each microservice should have its own data store, so you need the microservice architecture designed first. You can’t put the cart before the horse.

Data structures are sensitive. You’re storing a lot of business information in the database. It’s the lifeblood of the business. You only want to change that once, so change it after decomposing your business logic into services that access independent parts of the database. If you don’t do the business logic layer first, you’ll just have to decompose the database again later. 

Q: Explain how breaking down monoliths in the business logic layer into microservices works with the database layer.

Bob: Every microservice should have its own database and set of tables or data services, so if you change one microservice, you don’t have to test or impact another. If you decompose the business logic with the database in mind, you can create five different microservices that have five different data stores, for example. This sequencing makes more sense and prevents having to cycle on the database more than once. 
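The database-per-service sequencing Bob describes can be sketched minimally; the service and table names below are hypothetical, chosen just to show the boundary.

```python
# Minimal sketch of the database-per-service idea; service and table names
# are hypothetical. Each service owns its store exclusively, so changing one
# service's schema cannot break the other.

class OrderService:
    def __init__(self):
        self.store = {"orders": {}}      # private to this service

    def place(self, order_id, item):
        self.store["orders"][order_id] = item

class BillingService:
    def __init__(self):
        self.store = {"invoices": {}}    # private to this service

    def invoice(self, order_id, amount):
        self.store["invoices"][order_id] = amount

orders, billing = OrderService(), BillingService()
orders.place(1, "widget")
billing.invoice(1, 9.99)

# No shared tables: the only coupling between the services is their APIs.
assert "invoices" not in orders.store and "orders" not in billing.store
```

The design point is the exclusivity: because neither service can reach into the other's tables, each data store can be changed, scaled, or re-platformed on its own schedule.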

Also, you clearly want to organize your access to the database according to the needs of the business logic, not the other way around. One thing we find when people lift and shift to the cloud is that their data store is typically using the most expensive services available from cloud providers. The data layer is very expensive, especially if you don’t break down the business logic first. If you decompose the business logic first, you end up with more efficient, optimized, and economical data services from the get-go: services that save you money, are more cloud-native, and fit a forward-looking model that delivers the cloud benefits you’re looking for. Go to the business logic first, and it unlocks the opportunities.

Q: What’s the problem with starting modernization with whatever layer feels the most logical?

Bob: Modernization is littered with shortcuts and ways to avoid dealing with the hardest part, which is refactoring, breaking up and decomposing business logic. UI projects put a shiny front on top of an older app. If that’s a need for the business, that’s fine, but in the end, you still have a monolith with the same issues. It just now looks a little better. 

A similar approach is taking the whole application and lifting and shifting it to the cloud. Sure, you’ve reduced data center costs by moving it to the cloud, but you’re delaying the inevitable. You just moved from one data center (your own) to a cloud data center (like AWS). It’s still a monolith with issues that only get bigger and cause more damage later. 

Q: How does vFunction help with this?

Bob: Until vFunction, architects didn’t have the right tools. They couldn’t see the problem so they couldn’t fix it. vFunction enables organizations to do the hard part first, starting with getting visibility and observability into the architecture to see how it’s operating and where the architectural technical debt is, then measuring it regularly. Software architects need that visibility. If we can make it easier, faster, and data-driven, it’s a much more efficient path so that you don’t have to do it again and again. 

Q: How do you focus on the business logic with vFunction? 

Bob: If you’re going to build microservices, you need to understand what key business services are inside a monolith; you need a way to begin to pull those out and clearly identify them, establish their boundaries, and set up coherent APIs. That’s really what vFunction does. It looks for clusters of activities that represent business domains and essential services. You can begin to detangle and unpack these services, seeing the services that are providing key value streams for the business that are worth modernizing. 

You can pull each out as a separate microservice to then run it more efficiently in the cloud, scale it, and pick the right cloud instances that conform to it. You can use all of the elasticity available in containers, Kubernetes, and serverless architectures through the cloud. You can then split up a database to represent just that part of the data domain the microservice needs, decomposing the database based on that microservice. 

Q: Visibility is key here, right?

Bob: Yes. The difficulty is having visibility inside the monolithic application, and since you can’t see inside it or track technical debt, you have no idea what’s going on or how much technical debt is in there. The first step is to have the tools to observe and measure that technical debt and understand the profile, baseline it, and track the architectural patterns and drift over time. 

Q: How does technical debt accumulate, and what can architects do about it?

Bob: You may see an application that was constructed in a way that maybe wasn’t perfect, but it was viable, and over time it erodes and gathers more and more architectural technical debt. There are now more business layers on top of it, more code that’s copied, and new architects come in. There are a lot of permutations that happen, and that monolith becomes untenable in its ability to fulfill changing requirements, updates, and maintenance. Monoliths are very brittle. Southwest Airlines and Twitter know this all too well.

But this is where vFunction comes in to help you understand where that architectural technical debt is. You can use our Continuous Modernization Manager and Assessment Hub to provide visibility and tracking, and then our Modernization Hub helps you pull apart and identify the business domains and services.

Q: What infrastructure and platforms support the business logic?

Bob: Application servers run the business logic. Typically, we find Oracle WebLogic, IBM WebSphere, Red Hat JBoss, and many others. Monoliths are thus dependent on these legacy technology platforms because the business logic is managed by these application server technologies. This means that both the app server and the database are older, more expensive, licensed technologies written for a different architecture or domain 10-20 years ago.

Q: What are the key benefits of looking at the business logic layer first?

Bob: By starting with the key factors that compose your architecture, including the classes, resources, and dependencies, you start to identify the key sources of architectural technical debt that need to be fixed. Within this new architecture, you want to create high levels of exclusivity, meaning that the components and the resources they depend on should be exclusive to each microservice. The primary goal is to architect services that are highly independent of each other.

Q: And what does that mean for the developer?

Bob: For the developer, it increases engineering velocity. 

In a monolith, if I want to change one thing, I have to test everything because I don’t know the dependencies. With independent microservices, I can make quick changes and turns, testing cycles go down, and I can make faster, more regular releases because my test coverage is much smaller and my cycles are much faster. 
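The shrinking test surface can be illustrated with a small change-impact sketch: given "X depends on Y" edges, a change forces retesting of everything that transitively depends on the changed module. The module names here are invented for the example.

```python
# Change-impact sketch (module names invented): changing a module forces
# retesting everything that transitively depends on it.

def retest_set(depends_on, changed):
    """Return `changed` plus every module that transitively depends on it."""
    dependents = {}                       # invert the "depends on" edges
    for mod, deps in depends_on.items():
        for dep in deps:
            dependents.setdefault(dep, set()).add(mod)
    result, frontier = {changed}, [changed]
    while frontier:
        for parent in dependents.get(frontier.pop(), set()):
            if parent not in result:
                result.add(parent)
                frontier.append(parent)
    return result

# Entangled monolith: everything ultimately reaches the shared "db" module,
# so touching the database means retesting the whole application.
monolith = {"ui": ["logic"], "logic": ["db"], "reports": ["db"], "db": []}
print(sorted(retest_set(monolith, "db")))          # ['db', 'logic', 'reports', 'ui']

# After extracting reports into its own service with its own store, a change
# to that store touches only the one service.
services = {"ui": ["logic"], "logic": ["logic_db"], "reports": ["reports_db"],
            "logic_db": [], "reports_db": []}
print(sorted(retest_set(services, "reports_db")))  # ['reports', 'reports_db']
```

In the monolith every change reaches the whole graph; after the split, the retest set collapses to the one service and its private store, which is where the faster release cycles come from.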

Microservices are smaller and easier to deal with, requiring smaller teams and a smaller focus. You can respond faster to customer feature requests. As a developer, you have much more freedom to make changes and move to a more Agile development environment. You can start using more DevOps approaches, where you’re shifting left all of the testing, operational and security work into that service because everything is now much more contained and managed. 

Q: What does it mean from an operational perspective?

Bob: From an operational perspective, if the application is architected with microservices, you have more scalability in case there’s a spike in demand. With microservices and container technology, you can scale horizontally and add more capacity. With a monolith, if I do that, I might only have a certain amount of headroom, and I can’t buy a bigger machine. With memory and CPU limits, I can’t scale any further. I may have to start replicating that machine somewhere else. By moving to microservices, I have more headroom to operate and meet customer demand. 

So, developers get higher velocity, it’s easier to test features, there’s more independence, and operationally, they get more scalability and resilience in the business. These benefits aren’t available with a monolith. 

Q: This sounds like it requires a cultural shift to get organizations thinking differently about modernization.

Bob: Definitely. From a cultural perspective, you can start to adopt more modern practices and more DevOps technologies like CI/CD for continuous integration and continuous delivery. You’re then working in a modern world versus a world that was 20-30 years ago. 

As you start moving monoliths to microservices, we hear all the time that engineering morale goes up, and retention and recruiting are easier. It’s frustrating for engineers to have a backlog of feature requests you can’t respond to because you have a long test cycle. The business gets frustrated, and engineers get frustrated, which leads to burnout. Modernizing puts you in a better position to meet business demands and, honestly, have more fun. 

Q: Are all monoliths bad?

Bob: No, not all monoliths are bad. When you decompose a monolith into many microservices and teams, you should end up with a more efficient, scalable, higher-velocity organization, but you also have more complexity. While you’ve traded one set of complexities for another, you are getting extensive benefits from the cloud. With the monolith, you couldn’t make changes easily, but now, with microservices, it’s much easier to make changes since you are dealing with fewer interdependencies. While the application may be more efficient, it may not be as predictable as it was before, given its new native elasticity.

As with any new technology, this evolution requires new skillsets and training, and making sure your organization is prepared with relevant cloud experience (container technologies and DevOps methodologies, for instance). Most of our customers already have applications on the cloud and have developed a modern skillset to support that. But with every new architecture comes a new set of challenges.

Modernization needs to be done for the right reasons and requires a technical and cultural commitment as a company to be ready for that. If you haven’t made those changes or aren’t ready to make those changes, then it’s probably too soon to go through a modernization exercise. 

Q: What is the difference between an architect trying to modernize on their own versus using a toolset like vFunction offers? 

Bob: Right now, architects are running blind when it comes to understanding the current state of their monolithic architectures. There are deep levels of dependencies with long dependency chains, making it challenging to understand how one change affects another and thus how to untangle these issues. 

Most tools today look at code quality through static analysis, not architectural technical debt. This is why we say vFunction can help architects shift left back into the software development lifecycle. We provide observability into their architecture which is critical because architectural complexity is the biggest predictor of how difficult it will be to modernize your application and how long it will take. If you can’t understand and measure the architectural complexity of an application, you won’t be able to modernize it. 

Q: Is vFunction the first of its kind in terms of the toolset it provides architects?

Bob: Yes. We have built a set of visibility, observability, and modernization tools based on science, data, and measurement to give architects an understanding of what’s truly happening inside their applications. 

We also provide guidance and automation to identify where the opportunities are to decompose the monolith into microservices, with clear boundaries between those microservices. We offer consistent API calls and a “what if” mode — an interactive, safe sandbox environment where architects can make changes, roll back those changes, and share with other architects for greater collaboration, even across globally dispersed teams.

vFunction provides the tooling, measurement, and environment so architects and developers have a proactive model that prevents future monoliths from forming. We create an iterative best practice and organizational strategy so you can detect, fix, and prevent technical debt in the future. Architects can finally understand architectural technical debt, prevent architectural drift, and efficiently move their monoliths into microservices.



Technical Debt Risk: Review SWA, the FAA and Twitter Outages

How all organizations can learn to spot the warning signs

Until recently, “technical debt” was a term reserved mostly for those in IT, specifically architects, developers, app owners, and IT leaders. Thanks to a few high-profile outages at Southwest Airlines, the FAA, and Twitter, technical debt has made it into mainstream media outlets, which are reporting on how unchecked technical debt contributed to failures that impacted millions of people and, for some organizations, cost billions and immeasurable damage to their brand reputations.

While these organizations likely wish their hardships weren’t blasted to the public, perhaps the spotlight will serve as a warning to the thousands of other organizations that could share similar fates if they don’t act soon to address their technical debt. As more organizations shift applications to the cloud to enhance their capabilities, the problem will only increase. 

In this Q&A with Bob Quillin, the chief ecosystem officer at vFunction, we take a deep dive into how technical debt happens, the risks of ignoring it, and how it can be efficiently managed before it leads to major issues.

Q: Can you give me a little background on each of these system failures? Let’s start with Southwest Airlines.

Bob: Southwest Airlines has actually had two failures recently. The most recent issue was a firewall failure. Even the vice president said they never know when a failure is going to happen, and fixes have been slow. This is the definition of technical debt risk.

The first outage impacted tens of thousands of travelers during the peak holiday season. At first glance, you might think it was just an unfortunate coincidence, but technical debt typically is most dangerous when there is stress on the infrastructure, so the timing of this crash wasn’t random.

Over the last few years, Southwest has been called out for its outdated systems that need upgrading. How they interact with crew members and guests is very manual and phone-based. Even the pilots and crew have been saying the systems are antiquated. Most major airlines have fully modernized their business processes, whereas Southwest has not. They knew they had technical debt, but they weren’t addressing it. This scenario is typical of most technical debt issues we see in the marketplace. You keep kicking the can down the road and crossing your fingers. 

When you start seeing technical debt being used in both financial and mainstream press as reasons for high-profile business outages, it raises the visibility of the business impact, where the IT and engineering teams aren’t the only ones talking about it. When it causes a billion-dollar outage that impacts millions of people, it’s more obvious even to business people outside of IT. It can affect application availability, firewalls, data security, and more. When one card falls, others fall too, and you never know when it’s going to happen or how many systems it will impact.

Q: What about the FAA?

Bob: The FAA failure was an issue around a damaged database file and is a good example of an aging app infrastructure. With an older monolithic architecture like the FAA has, a single issue in one location has a ripple effect all the way down, cascading to a greater issue. Had they broken down their monoliths into microservices, they would have had a more distributed architecture with greater survivability, so one outage wouldn’t cause others to shut down the system. 

The FAA knew they had an outdated application that needed to be modernized, but it was risky to change. Everyone is adding more features and trying to patch it here and there, so one problem causes so many others. 

Q: Is there a way to reduce that risk?

Bob: You have to directly measure and manage technical debt to try to understand the risk — what are the dependency chains, the downstream effects? To stay in front of that you need a technical debt analysis strategy to track architectural drift and monitor how components are dependent and interrelated. Then you can begin isolating where problems occur, and the blast area is smaller. A best practice is if there is a problem, you are able to isolate it to minimize the cascading effect. Southwest Airlines couldn’t handle the scale, but the FAA had one small problem that cascaded into a bigger issue. It’s why so many organizations are moving to a cloud-native architecture.

Q: Let’s talk about Twitter. It had less of a catastrophic impact, but it was at a minimum, an inconvenience for users.

Bob: The Twitter outage was attributed to a coding mistake. There was a lot of public discussion among the engineering teams sharing that the application has grown dramatically over the years, and it’s slow and hard to change. They chose velocity over performance, spending a lot of time trying to add more capabilities without fixing the technical debt. We see this mistake across many companies.

Twitter is now trying to make more structural changes, replacing old features with new ones, and realizing the code can’t change as quickly as the new management wants. They are trying to ramp up engineering velocity, but the applications weren’t built for that. 

With a cloud-native architecture, they could add those features more quickly with more agility, but the technical debt they’ve accumulated over the years makes it harder to make changes. They’ve taken on too much technical debt to adopt new features quickly, and the application has just become too brittle. Unfortunately, you can’t take a monolith and turn it into a cloud app magically.

Q: These are examples of technical debt risk at large organizations. Does technical debt apply to smaller companies as well?

Bob: Most definitely. If you look at the types of organizations we’ve just discussed, we have a 40-year-old major airline, a government entity that’s slower to modernize but has mission-critical applications, and then a newer cloud-unicorn company that you’d think is technically advanced. All three share technical debt that formed for different reasons and caused high-profile failures that transcend from a technical problem to a business problem.

What typically happens is that technical debt is only discussed inside of engineering and only surfaces when something catastrophic happens. But, all three examples are very visible, and they occur on a smaller scale at probably every company. 

Q: How can a company know they have a technical debt problem?

Bob: Technical debt causes many familiar symptoms: a feature that didn’t come out on time, a key customer lost, a deal lost to a competitor. All of these are often related to your inability to respond quickly due to slow engineering velocity that’s dragged down by technical debt. You can see it occurring at a micro level that’s less visible than a total system crash. You lose a deal, a customer, or market share one drip at a time. All of those losses can come about because technical debt slows your ability to innovate and keep up with opportunities.

On the flip side, look at what happened to Zoom. Zoom took the pandemic as an opportunity and was able to race ahead of competitors. No one anticipated everyone going virtual. They had the agility to make those changes quickly because they were cloud-native. Other businesses were slower to respond.

What happens when the pandemic effect is over? Can you respond to the next opportunity? All those windows are built upon engineering velocity driving business agility. There is nothing worse for a CTO, senior engineer, or app owner than to have to explain to their CEO or CFO that the company can’t innovate and win because it doesn’t have engineering agility.

Q: So how do organizations typically approach the lack of engineering velocity or business agility?

Bob: Usually, they debate whether they should hire more people, use less expensive resources, or outsource. They ignore technical debt and bolt on more and more features while trying to move faster. The problem with monoliths is that there’s only so fast you can move. More people doesn’t always mean you can move faster. You can’t hire enough people or buy big enough machines to keep up.

The only way to increase velocity and innovate faster is to rearchitect the product. With a monolithic architecture, you have fixed costs in terms of hardware and software infrastructure that are cost-prohibitive. We have one customer that couldn’t buy a bigger machine because it didn’t exist. Their only option was to break up the monolith into microservices to scale up. They could then afford to add resources where it helped the business, and with the efficiency they gained, they could apply the dollars they had to infrastructure and licensing needs.

Q: Are budgets a significant component here?

Bob: The problem is that companies aren’t addressing technical debt because they don’t want to dedicate the resources for it – time, people, and money. They either need to add more resources or dedicate the time to fix it. Unfortunately, your resource budget isn’t likely to go up and will probably be reduced. So what do you do? 

You can just let things go and keep bolting on more features at the expense of fixing the debt. That works out fine until the rules change. For example, Elon comes in and says we’re going to get rid of this and add that, and the engineers say they can’t make the changes required to change the business model that way.

Q: So, there is a cost to carrying technical debt?

Bob: Absolutely. That’s where business planning comes in. You have to look at what technical debt is costing you and build a business case showing there is ROI in modernizing. How do you break out of this deadly cycle, where technical debt is going up and innovation is going down? It requires a frank conversation. Before vFunction, there was no way to build that business case so you could have the conversation.

Q: How does vFunction help build that business case for reducing technical debt risk?

Bob: Our goal is to use science and data to analyze your app, determine the most effective way to modernize it, and help you put together a business case. We tell you where to modernize, the reasons and risks, and the upside: what percentage of your IT budget is going to technical debt versus innovation. We can provide those insights in just six months.

Businesses of all sizes need the data, the analysis, and the ability to understand which architectural changes they must make to get that velocity and avoid the outages others are seeing. More importantly, you end up in a win-win situation: minimizing catastrophic events while gaining the business velocity you need.

Q: In the past, it was hard to quantify innovation, but vFunction can do that?

Bob: Yes. Our software puts numbers on what innovation means. Innovation is a goal, but what does your feature backlog look like in terms of features and new capabilities you want to add to your application? How much is that growing over time, and are those features working? 

If you can increase your feature velocity, that will give you a dollar amount on the other side. Will it add $1M to your bottom line? You can build a business case on feature velocity. You can also understand how much an outage would cost, or if you already have one, how fast you can make bug fixes. There is a cost to that. 

There is also a cost to run an app: high-cost hardware, software licensing, and database licensing. All have a compelling, hard-dollar cost. You need a business case with a clear view of what you want to do, where you want to do it, and how long it will take, so you can have a clear discussion about business value.

Most modernization projects that have been successful have this full visibility into the advantages. That said, you have business-critical apps that need to keep running, and you can’t just flip a switch. There are a variety of best practices, like the Strangler Fig Pattern, to keep the monolith alive while you modernize. It’s a risk-averse, programmatic, sequential way to move from an old pattern to a new one without a drop in service.
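The Strangler Fig Pattern Bob mentions can be sketched in a few lines: a routing facade sends traffic for already-migrated endpoints to new microservices while the legacy monolith keeps serving everything else, so functionality is carved out one endpoint at a time with no drop in service. This is a minimal illustrative sketch; the handler names and routing logic are hypothetical, not part of vFunction’s product.

```python
# Minimal sketch of the Strangler Fig Pattern: route each request either
# to the legacy monolith or to a new microservice, depending on whether
# that endpoint has been migrated yet. Names here are illustrative.

def legacy_monolith(endpoint, payload=None):
    return f"monolith handled {endpoint}"

def new_microservice(endpoint, payload=None):
    return f"microservice handled {endpoint}"

class StranglerFacade:
    def __init__(self):
        # Endpoints already carved out of the monolith.
        self.migrated = set()

    def migrate(self, endpoint):
        """Mark an endpoint as now served by a new microservice."""
        self.migrated.add(endpoint)

    def route(self, endpoint, payload=None):
        # The facade is the single entry point; callers never know
        # which implementation served them.
        handler = new_microservice if endpoint in self.migrated else legacy_monolith
        return handler(endpoint, payload)

facade = StranglerFacade()
print(facade.route("/billing"))   # still served by the monolith
facade.migrate("/billing")        # carve /billing out incrementally
print(facade.route("/billing"))   # now served by the microservice
print(facade.route("/reports"))   # untouched endpoints keep working
```

Because the facade is the only entry point, each endpoint can be migrated, verified in production, and rolled back independently, which is what makes the pattern risk-averse and sequential.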

Q: How long does assessing technical debt risk take?

Bob: vFunction Assessment Hub is relatively quick, typically focusing on a core set of apps you determine are worth modernizing; that can be a handful or hundreds of apps with business value. Our Assessment Hub is an affordable, efficient, and automated way to build the business case, taking less than an hour for one app or a few weeks for a larger application estate.

Q: Once you understand the extent of your technical debt, then what?

Bob: vFunction Modernization Hub analysis is automated, but it involves active interaction with an architect through our Studio UI to refine and refactor the architecture. A process that might take years to complete without vFunction takes only weeks or months with it, with higher-quality results. With Modernization Hub, you have the data and the understanding of how the architecture and its dependencies improve, or don’t, with each change.

Q: What are the costs and time associated with modernizing with Modernization Hub?

Bob: The cost and time are based on the scale of the app, so the Assessment Hub will tell you how long it will take. Some apps have millions of lines of code and tens of thousands of classes, so it takes more time. Our pricing and estimations are based on complexity and the number of classes within the app. With our service extraction capability, it’s a full, end-to-end cycle. We find a major value in visualizing the recommended service topology and refining the architecture from there. 

Q: What is the role of the architect here?

Bob: The architect stays in control, but we guide them. They can decide whether to split services out or combine them. We facilitate those decisions and provide guidelines and recommendations, but the idea is to use vFunction as an expert tool that helps architects do their job more efficiently and clearly, with observability and control on their end.

Q: Is modernization a one-and-done sort of thing?

Bob: It’s not. It’s continuous because there are always changes to the architecture and apps. But vFunction Continuous Modernization helps you baseline your architecture, monitors the metrics you need to track, and detects critical architectural drift. We alert you when something exceeds an expected baseline or threshold — anything that causes a spike in technical debt that needs to be controlled. Then, the architect can go back into the Modernization Hub to fix it. 
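The baseline-and-threshold alerting Bob describes can be sketched generically: record a baseline value for each architectural metric and flag any measurement that drifts past it by more than a tolerance. The metric names, values, and tolerance below are hypothetical illustrations, not vFunction’s actual metrics or thresholds.

```python
# Hypothetical sketch of architectural-drift alerting: compare current
# metric values against a recorded baseline and alert on spikes.
# All metric names and numbers are illustrative assumptions.

BASELINE = {"class_entanglement": 0.30, "cross_service_calls": 120}
TOLERANCE = 0.10  # alert when a metric exceeds baseline by more than 10%

def drift_alerts(current):
    """Return a list of alert messages for metrics that exceed baseline."""
    alerts = []
    for metric, base in BASELINE.items():
        value = current.get(metric, base)
        if value > base * (1 + TOLERANCE):
            alerts.append(f"{metric} drifted: {value} exceeds baseline {base}")
    return alerts

# A spike in entanglement triggers an alert; call counts within
# tolerance do not.
print(drift_alerts({"class_entanglement": 0.35, "cross_service_calls": 118}))
```

The point of the sketch is the workflow: establish a baseline, monitor continuously, and hand anything that crosses a threshold back to the architect to fix before the debt compounds.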

Q: Finally, what’s the ultimate lesson we can learn from the Southwest Airlines, FAA, and Twitter failures?

Bob: The fact that technical debt has worked its way into the business press and everyday conversation is not a good thing. It’s a warning to every business, and now that it’s so public, your business leaders will likely start asking how technical debt is being addressed. 

If you’re not tracking your technical debt, you will miss the warning signs. You’ll start to see slowdowns and glitches, business failures, and failure to meet business expectations. Every application owner is assuming and hoping these issues won’t snowball into a catastrophic failure down the line, but we are seeing more of these happening. 

It’s easy to understand if you think of it in health terms: just as chest pain can be an early warning sign of a heart attack, these slowdowns and glitches are early warning signs of failure. If technical debt truly can have a critical effect on your business, and you see warning signs, at least measure, monitor, and prepare. You need a physical for your application estate. We are like an EKG, identifying where the problems are and their extent. You don’t want to wait until fixable issues grow into a catastrophe like they did at Southwest Airlines. Be proactive now, and you can manage technical debt and control the risk so that it won’t stop the heart of your operations.

Bob Quillin not only serves as Chief Ecosystem Officer at vFunction but also works closely with customers, helping enterprises accelerate their journey to the cloud faster, smarter, and at scale. His insights have helped dozens of companies successfully modernize their application architectures with a proven strategy and best practices. Learn more at vFunction.com.
