How Will AI-Augmented Software Engineering Impact Our Businesses?

ChatGPT, the most advanced conversational AI chatbot yet publicly revealed, is taking the world by storm. Millions of ordinary people are using it, and most are highly enthusiastic about its ability to create human-like written content that helps them in their daily lives. Software professionals, too, are taking note of this new kid on the AI block. For them, ChatGPT is a portal into a future in which AI-augmented software engineering will inevitably disrupt traditional approaches to coding, maintaining, and updating the software applications modern businesses depend on.

As an article in the Harvard Business Review puts it,

“ChatGPT Is a Tipping Point for AI … The ability to produce text and code on command means people are capable of producing more work, faster than ever before… This is a very big deal. The businesses that understand the significance of this change — and act on it first — will be at a considerable advantage.”

In this article, we’ll use ChatGPT as an up-to-the-minute example of what AI-augmented software engineering can accomplish.

What Is AI-Augmented Software Engineering?

According to IEEE (the Institute of Electrical and Electronics Engineers),

Augmented intelligence is a subsection of AI machine learning developed to enhance human intelligence rather than operate independently of or outright replace it. It’s designed to do so by improving human decision-making and, by extension, actions taken in response to improved decisions.

AI-augmented software engineering applies the augmented intelligence concept to the realm of software development, maintenance, and improvement. In its practical application, the term describes an approach to software engineering that’s based on close collaboration between human developers and AI-enabled tools and platforms that are designed to assist and extend (but not replace) human capabilities.

To illustrate the importance of the collaborative aspect of augmented intelligence, the IEEE report cites the example of one clinical study aimed at detecting lymph node cancer cells. In that study, the AI system used had a 7.5 percent detection error rate. The error rate for human pathologists was 3.5 percent. But when the human pathologists and the AI system worked together, the error rate was just 0.5 percent.

What Can Today’s AI Do?

Software professionals around the world are now using ChatGPT to gain first-hand experience with the ways an advanced AI platform can extend the capabilities of application developers. Their reports highlight the benefits modern AI tools can provide for software engineering teams:

  • Writing New Code: According to one report, ChatGPT has “shocked” developers with its proficiency at writing code. As this user puts it, if you tell ChatGPT to do so, “it will happily create web pages, applications and even basic games” in any of today’s widely used programming languages (C, Java, HTML, Python, etc.). But, as we’ll see below, today’s AI still has some significant limitations in this area.
  • Explaining and Documenting Existing Code: One of the greatest benefits of ChatGPT is that you can give it a piece of existing code, ask what the code does, and receive a lucid, accurate explanation written in plain language. For developers working with legacy code, which is often highly opaque because of inadequate documentation, that’s a huge benefit that only an advanced AI platform can provide. In fact, the explanations the AI engine provides are so clear and well written that they can also serve as a great learning tool for less experienced developers.
  • Enhancing QA and Defect Remediation: ChatGPT can analyze a piece of code to detect and explain bugs that human developers may overlook. It can also suggest fixes for the errors it uncovers. Advanced AI platforms can automate software testing to a significant degree, substantially shortening the development cycle.
  • Translating From One Language/Environment to Another: Developers can present ChatGPT with code written in one language and have that code accurately translated into the syntax of another language with which the coder may be less familiar.
  • Turbocharging Low-Code/No-Code Development: Low-Code/No-Code (LCNC) is already having a big impact on operations in many companies. It allows business users, who may have few technical coding skills, to automate processes in their workflows with minimal assistance from IT professionals. The ability of ChatGPT to produce working code based on natural language inputs democratizes software creation even more. It is, as one observer put it, LCNC on steroids.

Related: Using Machine Learning to Measure and Manage Technical Debt

What Does AI-Augmented Software Engineering Mean for Developers?

A key aspect of the IEEE definition of augmented intelligence is that it affirms that the purpose of AI-augmented software engineering is not to replace the human element but to assist and enhance it. Jorge Hernández, a Sr. Machine Learning Research Engineer at Encora, explains how this works:

AI-augmented software development helps reduce the cognitive load throughout the software development lifecycle … by helping manage the complexity of the problem, allowing workers to off-load routine tasks to the AI so that they can focus on the creative and analytical tasks that humans do best.

Today’s AI can relieve developers of many tasks that are either mundane and repetitive or forbiddingly intricate and complex, freeing them to focus on higher-level responsibilities such as architecture and overall design.

For example, by intelligently selecting generic or boilerplate code from the open-source universe and adapting it to the current use case, an AI-augmented coding assistant can relieve developers of the more trivial aspects of the software development process. As technical consultant Rob Zazueta says, “I can take that, modify it to fit my needs and cut through boilerplate stuff quickly, allowing me to focus on the more intensive kind of work the AI is not yet ready to handle.”

Similarly, by uncovering, explaining, and correcting bugs in complex legacy code, an advanced AI platform can save human engineers hundreds of hours of analysis and remediation time.

Deepak Gupta, CTO at LoginRadius, summarizes the impact of AI-augmentation in software engineering this way:

“Artificial intelligence is revolutionizing the way developers work, resulting in significant productivity, quality and speed increases. Everything — from project planning and estimation to quality testing and the user experience — can benefit from AI algorithms.”

Limitations of AI

It’s important to remember that AI engines don’t really think—they simply use patterns they discern in their training data to predict an appropriate response based on the parameters they are given. So, they don’t understand the real-world context of the issues they address. As a result, they can make egregious errors that would be obvious to a human.

For example, although the ability of ChatGPT to turn natural language descriptions into code is extensive and impressive, it has significant limitations in producing usable code on its own, especially for non-trivial coding problems.

When given complex coding tasks, ChatGPT sometimes produces code that, as one software expert put it, “may work but may also almost work.” That is, the code may look as though it does what the developer specified, but have non-obvious flaws that make it unreliable. Needless to say, such code is the stuff of developers’ nightmares.

So, we’re nowhere near the point where software engineering can simply be turned over to an AI coding engine. But what AI-enabled platforms can do is produce code that human developers can use as a starting point, saving time and avoiding many of the bugs that humans themselves inevitably introduce into their code when they start from scratch.

Related: Common Pitfalls of App Modernization Projects

What AI-Augmented Software Engineering Means for App Modernization

Because legacy codebases may be huge (often five million lines of code or more), and may contain embedded dependencies and hidden functionalities that are not obvious to the human eye, refactoring a monolithic legacy app to a microservices architecture is a task that’s normally too complex and time-consuming to be done manually. But when AI and human developers collaborate, app modernization can become a much quicker and safer process.

First, AI can provide insight into legacy codebases that human engineers would struggle to acquire on their own. With AI, the process of analyzing legacy apps to determine if and how they should be modernized can be substantially automated, and the ability of the AI platform to almost instantly assess what a legacy code module is doing and how it functions can save engineers hundreds of hours of analysis time.

AI can significantly streamline and automate the creation of microservices, giving engineers and architects the ability to identify the most effective entry points into a microservice and decide on appropriate domain boundaries.

AI can also allow developers to prototype various solutions, helping them understand the practical implications and benefits of each approach. Without AI, developers are effectively working in the dark, spending time and taking risks to experiment. AI brings visibility to the entire process—to what’s been done, what’s being done now, and the probable outcomes of specific “what if” scenarios.

In general, the task of modernizing legacy apps is too complex for humans alone to handle, while AI systems lack the strategic and contextual understandings required for formulating optimal business solutions. But when engineers and architects work collaboratively with a modern, sophisticated AI assistant, they can modernize applications far more quickly, with greater confidence and less risk.

Applying AI-Augmented Software Engineering to App Modernization

The vFunction platform is specifically designed to apply AI augmentation to the task of legacy application modernization, transforming a process that, when done manually, is complex, time-consuming, risky, and costly. With its advanced AI capabilities, vFunction can automatically analyze huge monolithic codebases, and substantially automate the process of refactoring them into microservices. vFunction speeds up the modernization process by a factor of 15 or more while reducing overall costs by at least 4X.

If you’d like to see how AI-augmented software engineering can lift your legacy app modernization programs to entirely new levels, schedule a vFunction demo today.

Leadership Buy-in on a Continuous Modernization Strategy

It Takes a First-Class Seat: Demonstrating the Benefits of Continuous Application Modernization

Continuous application modernization allows organizations to address legacy code in iterative steps. It adheres to an agile-like methodology where incremental improvements are delivered faster with less risk than traditional waterfall methods, where multiple updates are delivered at once. However, an effective agile environment requires a mindset change. 

Anyone promoting a continuous application modernization strategy must overcome the tendency to resist change. Individuals are hesitant to accept change when they do not understand its impact. That holds for employees as well as executives. In fact, organizational resistance to change is a primary obstacle to implementing new processes. 

A lack of executive buy-in and leadership support contributes to an organization’s fear of change. Unless management participates in the process, employees hesitate to invest their energies because they do not see a benefit. The proposed change is just another “fad” that will be replaced in a month or two. Why invest time and energy in a process that will disappear in a few months?

To ensure project success, IT must first get executive support. Without it, IT departments will encounter employee resistance. So how does IT achieve executive and employee buy-in?

Before you talk about ROI, risk assessments, and budgets, consider the psychology of change. Change management gurus and psychologists cite fear of failure, fear of the unknown, and fear of job loss as reasons for resisting change. At bottom, however, resistance signals that the perceived reward is not worth the risk.

For example, you have an aisle seat in coach on a full flight from LA to JFK. Just before take-off, the airline offers you a free upgrade to a first-class window seat. Are you going to turn down the upgrade because it requires a change? Probably not.

The same psychology applies to leadership buy-in. If you want executive support, you need to offer them a first-class seat. The question is, how do you do that?

Related: Creating a Technical Debt Roadmap for Modernization

Leadership Support for Continuous Application Modernization Strategies

Gaining leadership support means presenting information that demonstrates to executives that using a continuous application modernization strategy is in the company’s best interests. The rewards of a more agile development environment offset any risks associated with the strategy change. The secret to success is how the data is used to achieve buy-in.

Begin with Data

Companies have more projects than resources. It’s the leadership’s responsibility to decide which projects have priority. They want data to help with decision-making. Executives want to know that the appropriate due diligence has been conducted to determine the project’s scope and cost. 

The first step in proposing continuous application modernization is quantifying technical debt. IT must assess the time, costs, and scope to determine debt accurately. The process is time-consuming unless automated tools are used. For example, IT can manually calculate defect ratios by tallying old and new defects, or they can use bug-tracking software to store the information. Other metrics for assessing technical debt include:

  • Code quality using coding standards
  • Completion time using hours to correct a reported problem
  • Rework efforts using bug-tracking software
  • Technical debt ratio comparing the cost to fix problems versus the cost of building new (see the sketch below)
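
To make these metrics concrete, here is a minimal sketch of the defect-ratio and technical-debt-ratio arithmetic in Java. The class name, method names, and sample figures are illustrative assumptions, not taken from any particular tool.

    // Minimal sketch of two technical debt metrics; names and figures are
    // illustrative assumptions, not taken from any specific tool.
    public class TechnicalDebtMetrics {

        // Defect ratio: new defects divided by fixed defects over the same period.
        // A ratio above 1.0 means defects appear faster than they are fixed.
        static double defectRatio(int newDefects, int fixedDefects) {
            return (double) newDefects / fixedDefects;
        }

        // Technical debt ratio: cost to fix existing problems divided by the
        // cost of building the system new, expressed as a percentage.
        static double technicalDebtRatio(double remediationCost, double rebuildCost) {
            return remediationCost / rebuildCost * 100.0;
        }

        public static void main(String[] args) {
            System.out.printf("Defect ratio: %.2f%n", defectRatio(120, 80));  // 1.50
            System.out.printf("Technical debt ratio: %.1f%%%n",
                    technicalDebtRatio(200_000, 1_600_000));                  // 12.5%
        }
    }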

While these methods can reduce the assessment time, they still require IT to perform calculations and analysis. Fully automated solutions can eliminate much of the collection, analysis, and calculation required to present a business case for executive buy-in.

AI Modernization Platforms

AI-guided solutions can reduce the assessment time from weeks to hours with little to no IT involvement. For example, AI-automated assessment tools can analyze Java code in less than an hour, providing information on the following:

  • Technical Debt. Identify sources and measure negative impact if not addressed.
  • Complexity. Quantify the complexity of each application and prioritize the apps to modernize first.
  • Metrics. Assess the metrics needed for return-on-investment analysis.

Using a fully automated platform enables IT personnel to spend less time collecting data and more time focusing on the benefits that will deliver buy-in. 

Define the Process

Many outside of IT may not be familiar with the concept of continuous application modernization. They may not understand how the process differs from more traditional approaches to software development. Part of the buy-in process means explaining how the change impacts an organization.

Suppose a company has a payment processing solution that needs updating to support a different payment type. The project has a three-month deadline. As the project progresses, decisions are made to leave the architecture the same because recasting the payment type as a microservice would delay the release date.

After the software is released, the company can continue with its existing architecture, adding to its technical debt to be addressed decades in the future. Or, the company can add moving the payment type to a microservice to its list of modernization tasks and assign it a priority to decrease the technical debt as quickly as possible. 

The ramification for the company is that resources must be allocated to modernization tasks as part of its routine workflows. That may mean fewer resources are available to work on other projects. To many, this may seem like a negative: taking valuable resources away to fix software that is working just fine.

That’s when focusing on the benefits comes into play. It’s these benefits that will convince leadership that modernization is in the company’s best interest.

Focus on Benefits

Achieving executive support for continuous application modernization means addressing change in terms of benefits. It’s about using the data to inform the discussion on why continuous application modernization is the right strategy for an organization. Here are some tips on how to use data to demonstrate that modernization is the best choice.

Meet Key Business Objectives

Instead of talking about money and timelines first, talk about impediments to meeting business goals. Take the payment type example.

Suppose accepting crypto payments is a business objective. Start with what is needed to support that payment type, highlighting the technologies the existing architecture cannot support. Using the data from an automated analysis, explain the time and cost of modernizing the entire application. Be sure to note that the timeline assumes the use of automated tools.

Part of the discussion should include the impact on daily IT operations. When modernization occurs in a waterfall-like approach, all available resources will be consumed in the effort, leaving minimal IT staff to address everyday issues. Historical data should provide information on what percentage of staff time is used in system maintenance activities and support. 

Contrast the waterfall scenario with a continuous application modernization strategy that uses iterative development. With a prioritized analysis, discuss a timeline where microservice development is integrated into standard IT operations. Then, compare the timelines. Which approach is more likely to meet business objectives with less disruption and at a lower cost?

Improve Agility

Comparing timelines opens the discussion to another critical business objective — agility. Leadership is well aware of agility’s value to a company’s long-term viability. What they don’t know is how to achieve it. That’s where a continuous application modernization strategy comes in.

Related: IT Leader Strategies for Managing Technical Debt

Consider compliance updates using the payment example. Payment networks have annual or semi-annual updates that must be completed as scheduled to remain in compliance and continue payment processing. What happens when an update is required in the middle of modernization? 

Using data from an automated analysis, IT can determine which microservices are impacted by the updates. They can look at the project schedule and determine the impact on deliverables. A low-priority microservice may need to be higher on the list. If the modernization assessment presents the data per microservice, adjusting the timeline should be straightforward with little impact on the overall schedule.

A waterfall-based strategy could lead to difficult decisions. 

  1. If the modernization project is months away from completion, updates must be made to the existing code to remain in compliance. Updates will also need to be added to the modernization code to ensure backward compatibility. When the new code is delivered, the updates may require retesting or recertification since it is a new code base.
  2. If the project is close to completion, the updates can be added to the new code. The existing code would remain untouched. If the new code is not ready as anticipated, the company is out of compliance and risks penalties and fines. The added updates may extend test times. 

The compliance example illustrates continuous modernization’s agility. While the changes may impact overall delivery schedules, the strategy delivers the agility needed to ensure operations with minimal risk.

Achieve Buy-In for Continuous Application Modernization

Focusing on benefits still requires a detailed analysis of what modernization will take. It needs the same data as a more traditional time-and-materials approach. The difference is in the seat location.

In coach, executives struggle to see where the plane is going and the turbulence ahead. They cannot cut through the noise to decide what is in the company’s best interests. In first class, leadership encounters less noise and has a clearer view of the plane’s path. They are less resistant to change because they can see the long-term advantages of a continuous application modernization strategy.

vFunction’s Assessment and Modernization Hub provides organizations with a data-driven analysis of what modernization will require. That data can then be used to win leadership buy-in by focusing the discussion on the benefits that facilitate change. Contact us today to get started gaining leadership support for your modernization projects.

Getting Value from Modernizing Your Business Application in 2023

As we settle into 2023, now is the perfect time for your organization to start planning modernization for the year ahead. It’s essential that organizations regularly upgrade their operations to remain competitive. Keeping systems current also facilitates stability, growth, and higher levels of success.

That said, the variety of options available for app modernization can make it difficult to determine the best course of action. At the same time, the fact that there are so many options suggests there are many innovative minds helping make the process faster and more straightforward. So, if you’re among the many CIOs, CTOs, senior developers, application architects, and system integrators pondering ways to modernize your apps this year, this article is for you.

The Case for Modernizing Your Business Applications

Propelling most digital transformations is the recognition by executive-level managers of the pivotal role technology platforms play in accelerating growth. Of course, most digital transformation projects focus on migrating infrastructure and apps to the cloud. Nevertheless, indispensable legacy systems such as enterprise resource planning (ERP) systems, mainframe systems, Lotus Notes, and Microsoft’s SharePoint have generally been excluded from such projects.

While it may seem counterintuitive for organizations to hold onto legacy systems that are problematic and costly to maintain, these organizations value the familiarity and dependability of their legacy systems. But those seemingly beneficial qualities are greatly negated by a lack of features and flexibility. In addition, most legacy systems require specialized skill sets to manage them—skill sets that are steadily diminishing throughout most industries.

Many organizations that delay app modernization likely do so because its value is harder to assess than that of other business priorities. However, modernizing internal legacy apps greatly enhances both customer experience and business operations.

Incorporating microservices, for example, facilitates continuous integration and continuous delivery (CI/CD). This makes trying out new ideas and conducting rollbacks quick and painless. The microservice architecture achieves this by extending cloud support, though it is not exclusive to cloud computing.

According to a recent HubSpot article, cloud integration platforms break down software silos, improving collaboration, increasing visibility, and enhancing cost control. HubSpot reported that business departments using individual applications and services cause silos to develop quickly. 

Left unchecked, these departmental disconnects only grow. Recent HubSpot statistics show that large organizations typically use around 175 cloud-based applications, while smaller organizations use 73. To bridge this digital divide and allow IT teams to monitor and manage heterogeneous apps from a centralized system, organizations are seeking to modernize their legacy systems.

What To Expect in 2023 for Technology and Cloud Modernization

At one time, business relevancy wasn’t an issue many conventional tech leaders invested much thought or energy in. Before digital modernization became a hot-button topic, organizations addressed business and tech alignment in two distinct ways:

  • Horizontally. Organizations arranged IT teams according to their various skill sets, such as coding, custom development, and business analytics, among others. Additionally, they would loosely support all necessary operational areas.
  • Vertically. Organizations dedicated IT teams to aid various departments, such as finance, marketing, sales, security, inventory, etc.

Over the past couple of years, many enterprise IT teams have begun developing diagonal models that employ the most suitable elements of both strategies. In a diagonal model, the cloud provides a horizontal technology foundation accessible to any employee, and organizations can build on top of that foundation as they see fit. This restructuring process compels organizations and IT teams to cooperate in developing better ways of operating across the board.

Pivoting From Cloud Migration Toward Cloud Modernization

Over the past decade, cloud computing has gone from being merely a trend to being a megatrend. Trends, however, are rarely necessities; they are typically short-lived because they favor style over functionality. Many experts have long argued that cloud computing goes far beyond a trend and is, in fact, a necessity.

When it comes to cloud computing, as long as there is an Internet, it will only become more of a necessity for public and private users. In other words, unless another Carrington Event occurs in our lifetime, cloud computing is poised to become as integral to organizational operations as electricity.

Deloitte reported at the end of 2020 that nearly 70% of CIOs viewed cloud migration as one of their top IT spending drivers for the year. Deloitte supported this by noting that increased data center sales in Q2 2020 boosted the revenue growth of the three biggest semiconductor companies by 51%.

Nonetheless, while companies find it easy to invest in data centers, sifting through the multitude of options for implementing data migration complicates the process. But it doesn’t end there.

Many organizations finally bit the bullet in 2022 and made a 2023 New Year’s resolution to migrate to the cloud. As most business leaders know, as soon as you have the funds and resources to upgrade an aspect of your operation, the next innovative solution hits the market. 

Trying to keep your company up-to-date starts to feel more like a game of Whac-A-Mole. As we’ve written in the past, it will only become increasingly essential to think beyond cloud migration throughout the 2020s.

By the time the 2030s roll around, many businesses will find it nearly impossible to provide consumers with any sort of value without cloud modernization. Cloud migration is only a step toward the ultimate goal, which is modernization and future-proofing. Replicating legacy technology in cloud environments may seem sufficient at the moment, but that sufficiency will certainly wane the closer we get to 2030.

2023 will be the year that more leaders realize that even minor modernization projects require the adoption of cloud-native technologies. Of course, microservices are requisite for adopting cloud-native technologies. Once the growing pains of all this have subsided, the scalability and flexibility provided by cloud modernization will help enterprise IT teams innovate, develop, and deliver both faster and more efficiently. Not only will your customer experience improve notably, but your organization will be able to collect data across the totality of your business.

Shifting to Data-Driven Decision-Making With Cloud Computing

It’s safe to say that any forward-thinking enterprise wants to evolve into a dynamic data-driven company. As they say, the wars of the future will revolve around data, not oil. We already see this unfolding with private data firms like Palantir analyzing the Ukraine conflict. The U.S.-based company develops software that coordinates satellite imagery to assist the US military in monitoring conflicts and various threats globally.

“Palantir’s software is crucial during bad times for governments to handle the massive amounts of data they need to make a change,” the firm stated in a self-published press release. What pertinence does this have to the topic at hand, you might be wondering?

If data holds such value to governments and possesses enough power to influence geopolitics, imagine the benefits data offers companies that use it strategically in business. This is why we believe that the algorithmic enterprise will no longer be an abstract idea but a tangible reality.

Pivoting from gut-feeling analytics to purely data-driven analytics requires a robust, dynamic cloud foundation. The cloud offers the high level of computing power required for enterprise-level analytics. That computing power comes from innovations such as unsupervised machine learning, data mining, and predictive models. In addition, cloud platforms assist enterprise leaders in democratizing data access.

Most importantly, various types of data will become available to more employees and departments beyond IT. The end result: teams tapping into treasure troves of data-driven insight and making the best decisions most, if not all, of the time.

Provide Value Points by Modernizing Your Business Application

Gartner published a report predicting that spending on cloud projects will be over 50% of IT budgets by 2025 compared to 41% in 2022. This increase in cloud investment has resulted in many more organizations launching app modernization projects. If you’re considering, planning, or beginning to modernize your business application, you will want to ensure that you get the most value from your modernization efforts.

Michael Warrilow, research vice president at Gartner, stated:

“The shift to the cloud has only accelerated over the past two years due to COVID-19, as organizations responded to a new business and social dynamic […] technology and service providers that fail to adapt to the pace of cloud shift face increasing risk of becoming obsolete or, at best, being relegated to low-growth markets.”

In today’s cut-throat market, being relegated to a low-growth market is often a warning sign of becoming obsolete. To prevent this, it’s important to extract as much value as possible from modernizing your business application. 

In other words, it might not be enough to simply go through the process without establishing the value points most critical to your company. Modernization shouldn’t be approached as merely doing the minimum to get by. The value points enterprises most commonly concentrate on include improved scalability, increased release frequencies, better business agility, boosted engineering velocity, and stronger security.

The vFunction Architectural Observability Platform helps many organizations properly formulate their modernization strategies. It’s a purpose-built modernization assessment platform for decision-makers, empowering them to evaluate technical debt, risk, and complexity. Request a demo today to see for yourself how we can assist with your transformation.

Why is Container Management Important for Planning?

As technology continues to evolve, businesses are increasingly turning to microservices and containerization to improve their IT operations. Container management is the process of overseeing and maintaining the containers that hold the microservices in a distributed system. It enables the efficient deployment, scaling, and management of microservices. 

By understanding container management, you can better navigate the complexities of microservices and gain insight into the best practices and tools to deploy and maintain them efficiently. Defining what it is, why it’s used, and why a management strategy is needed will lead to more effective operations and better overall performance of your IT systems. Here’s what you need to know about container technology.

The Role of Virtual Machines in Container Technology

Container technology grew out of virtual machine partitioning in the 1960s. Virtual machines (VMs) let multiple users access a computer’s full resources through a single application. VMs were used to install multiple operating systems and applications on a single server.

Unlike VMs, containers have shared resources. Many deployments used multiple microservices in a single container. That complexity hindered container usage until automated management tools were developed. Now, system architects have access to solutions that make containerization software more reliable. These tools make up what is known as container management.

In contrast to VMs, containers require fewer resources and are faster to initialize, but those capabilities often make deployment more complex. When virtual machines were first introduced, developers used them to install multiple operating systems and applications on a single physical server, each in their own isolated environment. 

Instead of having multiple servers running different development environments, IT could maximize the use of physical servers through VMs. However, they were resource-heavy and slow to spin up.

Security was not a major concern when VMs first hit the market, so their deployment allowed unrestricted access to any environment running on the same device. In the 1980s, developers began looking at ways to restrict access to improve system security. Limiting file access to specific directories was the start of container-style processes. 

The Growth of Container Technology

Docker released its container platform in 2013. It was a command-line, open-source solution that made container deployment easy to understand. It was well-received, and within a year, the software was downloaded over 100 million times.

In 2017, Google’s Kubernetes became the default container tool as it supplied scheduling and orchestration. When Microsoft enabled Windows Server to run Linux containers, Windows-based development could take advantage of the technologies. That change opened the door to more container-based deployment.

As more organizations moved to the cloud, container management became a concern as deployment and monitoring remained complex. Today, many cloud providers offer managed container services to deliver streamlined solutions with independent scalability. Ongoing development looks to incorporate AI technologies for improved metrics and data analysis, leading to error prediction, incident resolution, and automated alerts. 

Related: Monoliths to Microservices: 4 Modernization Best Practices

According to Statista, the container market is expected to reach $5 billion in 2023, with a year-on-year growth rate of 33%. Whether considering or expanding the use of container technologies, organizations should develop a strategy for how to incorporate container management into their modernization plans.

To understand the role this technology plays in a company’s efforts to reduce its technical debt, IT staff should evaluate the landscape, beginning with what container management entails.

What is Container Management?

Containers are lightweight, virtualized environments that allow an application, its libraries, and its dependencies to operate as one deployable unit. Often used in conjunction with microservices, containers enable multiple applications sharing the same operating system kernel to function as self-contained code. Containerization makes for lighter-weight implementations with greater interoperability than VMs.

However, containers have to be managed. They have to be deployed, scaled, upgraded, and restored. As more companies look to cloud-native containers, managing hundreds or thousands of them becomes overwhelming. That’s why the market for container management continues to grow. 

Gartner defines container management as:

A set of “tools that automate provisioning, starting/stopping and maintaining of runtime images for containers and dependent resources via centralized governance and security policies.”

Container management solutions may be platforms that operate as software-as-a-service or software solutions installed locally. Container management enables developers and administrators to realize the benefits of containerized microservices while minimizing potential errors.

Why Use Container Management?

Container management tools simplify the deployment and monitoring of containers. With these tools, IT can stop, start, restart, and update clusters of containers. Automated tools can orchestrate, log, and monitor containers. They can perform load balancing and testing. 

IT departments can set policies and governance of a containerized ecosystem. As more organizations move to a container infrastructure, they will need automated tools to manage large environments that are too much for staff to maintain.

Other benefits of container management tools are:

  • Portable: Containerized applications can be moved from the cloud to on-premises locations. They can move to third-party locations without integration concerns, as the technology does not depend on the underlying infrastructure.
  • Scalable: Microservices can scale independently with minimal impact on the entire application.
  • Resilient: Different containers can hold applications so that a disruption in one container does not result in a cascading failure.
  • Secure: Containers isolate applications from one another. Gaining access to an application in one container does not automatically give bad actors access to others.

Containers are lightweight with minimal overhead, making for fast deployment and operation. However, turning monolithic legacy code into containerized microservices can introduce vulnerabilities unless carefully orchestrated. Container management is not without its challenges.

What Are Container Management Challenges?

Management complexity and the lack of a skilled workforce are the primary obstacles to containers and their management. For example, containers only exist when needed. When a container is no longer required, it spins down, taking all of its data with it. Maintaining persistent data across containers poses significant problems if not well managed. 

Complex Container Management

What happens when a container needs data that is no longer available? How can developers ensure data is persisted? How can they identify these errors with hundreds of clusters of containers to manage? Without container management, it’s almost impossible for developers and architects to deliver an error-free implementation.

Application isolation protects against VM-type vulnerabilities; however, orchestrators can expose containers to attack. APIs, monitoring tools, and inter-container traffic increase the attack surface for hackers. Using security best practices, such as only using trusted image sources, reduces authentication and authorization weaknesses. Closing connections minimizes the number of entry points for bad actors. 

Lack of Skilled Staff

The most significant challenge is the lack of experienced staff. Organizations need a thorough understanding of the scale and scope of a containerization effort. They need a roadmap that outlines how existing code connects and communicates to ensure relationships are retained. Since containers can run in multiple environments, architects must define the business objectives behind the move to modernization to ensure the right infrastructure is in place.

A recent Gartner survey found that Kubernetes and infrastructure-as-code were the most sought-after skills. Some organizations are developing expertise centers. These groups are tasked with helping in-house staff as needed while using the opportunity to train others to reduce the skills gap. Others are looking for outside sources to help with knowledge transfer. 

With the drive to modernization using microservices and containers, companies need a container management strategy to address the critical challenges that a complex container architecture can present. Without a reasoned strategy, organizations face a technical debt greater than that of legacy code.

Why is a Container Management Strategy Needed?

Container management should be part of any modernization strategy. However, its complexity requires a roadmap that includes the following:

  • Isolation of users and applications
  • Authentication and authorization protocols
  • Resource management
  • Logging and monitoring
  • Long-term container usage
  • Multi-cloud platforms
  • Microservice development

The plan should include methods to address edge computing, cloud implementations, modernization, and training. 

Implementing Edge Computing

With edge computing, the volume of data makes management difficult. Moving all the data to a central location poses performance concerns since much of the data is being captured at the edge. More organizations are looking at building edge infrastructures to prepare the data before sending it to the cloud for processing. 

Containerizing applications at the edge to allow data ingesting and cleaning should be part of any strategy. Containers can improve AI implementations or data-intensive processing and reduce cloud storage costs by placing those workloads close to the data acquisition point.

Refactoring Cloud Implementations

Within the last three years, many organizations have moved all or some of their infrastructure to the cloud. Unfortunately, for many, the move was rushed. Monolithic applications that were migrated (lifted and shifted) to the cloud or simply containerized as a full monolith were brittle and could not scale or be updated without refactoring or rewriting. Making architectural changes would require careful planning by developers and business units to minimize disruption without compromising modernization.

Related: Four Advantages of Refactoring that Java Architects Love

Companies cannot assume that simply migrating an application to the cloud removes any technical debt. By refactoring cloud implementations, IT departments can ensure that their cloud deployments actually reduce it.

Modernizing Monolithic Applications

Breaking monolithic software into microservices and containers requires a roadmap that helps developers navigate a way forward. A migration strategy also highlights skills gaps that can be addressed before modernization is underway. As container management tools improve and more organizations move to a cloud-native environment, a clear strategy is needed to ensure that the migration does not increase technical debt. 

Few modernization efforts can be completed all at once. The so-called big bang approach increases delivery times and introduces many of the same issues that monolithic structures present. Iterative approaches make thousands of lines of code more manageable and reduce the risk of operational disruption.

The vFunction Modernization Platform and Container Management

vFunction’s refactoring platform for Java and .NET applications can help organizations realize the benefits of container management. Using its platform, organizations can decompose monolithic apps into microservices for container deployment in cloud-native environments. The platform can serve as a tool for developing well-constructed microservices that minimize the risks to container implementations. 

As companies plan for 2023, they will look to containers and their management as a modernization path. Without a complementary modernization strategy, the resulting infrastructure can prove problematic. For help modernizing Java and .NET applications that take advantage of the cloud environment, contact us to see how we can work with you to develop a strong implementation strategy.

Strangler Fig Pattern to Move from Mono to Microservices

You’ve been given the green light to modernize a legacy system; however, the monolithic application provides core functionality to the entire organization. It has to remain operational during the process. You could leave the existing code in production while developing a microservices-based architecture, but you don’t have the resources to modernize and maintain the old application.

If your modernization projects require continuous operation, then applying the Strangler Fig pattern may be the migration strategy to use. It minimizes the impact on production systems and reduces disruption to the entire organization. So, how does the Strangler Fig pattern facilitate the modernization of legacy systems to microservices?

What is the Strangler Fig pattern?

The Strangler Fig pattern takes its name from the strangler fig tree. The tree begins as a seed in the host tree’s branches. The seed sends roots downward until they take hold in the soil surrounding the host tree. Eventually, the strangler fig replaces the host tree.

The Strangler Fig pattern in modernization operates on the same principle: gradually replace legacy code with microservices until the old code is gone. Using the Strangler Fig pattern begins with a facade interface. The interface serves as a bridge between the legacy system and the emerging microservice code.

What is a Facade?

The facade interface operates between the client side and the back-end code. When the facade sees a client request, it routes the traffic to either the legacy system or a microservice. As microservices come online, the facade sends more and more traffic to the modernized system until the legacy system no longer exists. At that point, the facade is removed, and clients communicate directly with the microservices.

As developers create microservices to replace existing functionality, they can test individual services, minimizing the risk of operational failure. If a problem arises, programmers can address it quickly as they only work with a single microservice. The facade can continue to route requests to the legacy system until the code can be safely released into production.
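
As a rough illustration of the routing role the facade plays, here is a minimal sketch in Java. The paths, service URLs, and route table are invented assumptions; in practice, this role is often played by an API gateway or reverse proxy rather than hand-written code.

    import java.util.Map;

    // Minimal Strangler Fig facade sketch: requests for functionality that has
    // already been migrated go to the new microservices; everything else still
    // goes to the legacy monolith. All paths and URLs are invented.
    public class StranglerFacade {

        private static final String LEGACY_BASE = "http://legacy-app.internal";

        // The route table grows as more microservices come online.
        private static final Map<String, String> MIGRATED = Map.of(
                "/payments", "http://payments-service.internal",
                "/invoices", "http://invoice-service.internal");

        // Decide where an incoming request should be routed.
        String routeFor(String path) {
            return MIGRATED.entrySet().stream()
                    .filter(e -> path.startsWith(e.getKey()))
                    .map(Map.Entry::getValue)
                    .findFirst()
                    .orElse(LEGACY_BASE); // not yet migrated: stay on the monolith
        }

        public static void main(String[] args) {
            StranglerFacade facade = new StranglerFacade();
            System.out.println(facade.routeFor("/payments/authorize")); // microservice
            System.out.println(facade.routeFor("/reports/monthly"));    // legacy system
        }
    }

Each time a new microservice proves itself in production, another entry is added to the route table; once the table covers everything, the legacy entry, and eventually the facade itself, can be retired.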

However, the efficacy of the Strangler Fig pattern depends on understanding the complexity of the existing code and the resilience of the Strangler Fig pattern facade. 

Understanding Code Complexity

Although the Strangler Fig pattern seems like the perfect solution for modernizing code with minimal risk, its success or failure depends on identifying the functions that should be turned into microservices. It means sorting through lines of code to isolate individual functionality. If the codebase is small, the Strangler Fig pattern adds a layer of complexity that is not required.

However, organizations working with millions of lines of code can use the pattern to segment migration and minimize risk. Identifying and managing patterns contributing to code complexity can simplify the modernization process.

Untangle Spaghetti Code

Spaghetti code refers to legacy applications that lack structure. Without a logical construct for the application, developers struggle to understand how the code flows. Fixing spaghetti code often relies on guesswork, leading to miscalculations and operational disruptions.

Remove Dead Code

Dead code refers to code that runs but appears to have no impact on the application’s behavior. Unreachable code, by contrast, exists but never executes. Both patterns complicate program logic and increase the likelihood of a dependency being missed.

Avoid Code Proliferation

Programmatic intermediaries can help new applications talk to legacy systems, but objects that exist to call other objects increase the codebase without adding value. In most instances, the middle object can be removed.

Why Facades Must be Resilient and Secure

The facade is what keeps a Strangler Fig pattern functioning. It ensures that incoming traffic is routed to the appropriate back end. If it fails, all or part of the production system could fail. If the legacy system is a critical-path application, resiliency must be designed into the facade.  

Design for Resilience

Resilience should be designed into the facade, including capabilities to absorb processing surges from batch updates. Legacy systems often use batch updating to maintain core information. When those files are sent record by record without throttling, systems can be overwhelmed. Designing solutions that operate in separate environments can reduce cascading failures. Resilient architectures can minimize possible failures during migration.
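
One simple way to build that protection into the facade is a concurrency throttle. The sketch below uses a semaphore to cap how many batch records are processed at once; the limit and class names are assumptions, and a production facade would more likely use a rate limiter from a resilience library.

    import java.util.concurrent.Semaphore;

    // Minimal throttling sketch: a semaphore caps in-flight batch records so a
    // surge cannot overwhelm the systems behind the facade. The limit of 10 is
    // an arbitrary illustrative value.
    public class ThrottledBatchHandler {

        private final Semaphore permits = new Semaphore(10);

        void handleRecord(Runnable processRecord) throws InterruptedException {
            permits.acquire();       // blocks while 10 records are already in flight
            try {
                processRecord.run(); // forward the record to the routed back end
            } finally {
                permits.release();
            }
        }

        public static void main(String[] args) {
            ThrottledBatchHandler handler = new ThrottledBatchHandler();
            for (int i = 0; i < 100; i++) {
                final int record = i;
                new Thread(() -> {
                    try {
                        handler.handleRecord(() ->
                                System.out.println("Processed record " + record));
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }).start();
            }
        }
    }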

Build in Security

With high traffic volumes, facades can be vulnerable to cyberattacks. Zero trust architecture can address server-to-server vulnerabilities. APIs expose components of a monolithic system to outside sources that lack strong security. When converting to microservices, protection from external attacks cannot be assumed. Security considerations should be included in any modernization strategy.

Related: Strangler Architecture Pattern for Modernization

How to Use the Strangler Fig Pattern with Microservices

The Strangler Fig pattern lets developers update code incrementally. There’s no need to shut down the legacy system and risk an outage if the new code doesn’t work as planned. Instead, the software is refactored, and the legacy system’s functions are gradually cut off. The iterative process allows development teams to focus on refactoring one service at a time. It also eliminates the need for multiple teams to maintain two systems.

Down-size God Classes

A single class that grows to encompass what should be multiple classes makes moving to microservices frightening for even the best developers. A god class and its thousands of lines of code are referenced from methods across the entire codebase. Moving or deleting code from the god class can have unexpected outcomes because of the difficulty of identifying interdependencies.

With the Strangler Fig pattern, place variables in object-based data structures. Store the god-class code in an object that links to the appropriate structure. Use the data structure in the microservice and reflect the change in the legacy code. As modernization progresses, well-structured code replaces god classes until they no longer exist.
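
As a small, invented illustration of that idea: shared state that once lived as loose fields on a god class is gathered into an object-based structure that both the legacy code and the extracted microservice can reference. All names here are hypothetical.

    // Before (inside a hypothetical god class): loose fields referenced everywhere.
    //   public static String customerName;
    //   public static String customerTier;
    //   public static double creditLimit;

    // After: the related state is grouped into one object-based data structure.
    // Both the legacy code and the new microservice depend on this type, so the
    // behavior can migrate without breaking the monolith.
    public record CustomerProfile(String name, String tier, double creditLimit) { }

    // The extracted microservice works against the structure, not the god class.
    class CustomerService {
        boolean canPurchase(CustomerProfile profile, double amount) {
            return amount <= profile.creditLimit();
        }
    }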

Replace Hard-Coded Statements

Hard-coded statements should be replaced with dynamic services. Java-based applications often use hard-coded SQL statements, which inhibit code agility. When creating microservices with the Strangler Fig pattern, these statements can be replaced incrementally until none remain. The corresponding logic in the legacy system can then be disabled, leaving dynamic microservices.
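
The sketch below shows one hedged example of what that replacement can look like: a string-concatenated SQL statement becomes a parameterized query behind a method boundary, so the query logic can later move into a microservice intact. The table and column names are invented.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    // Illustrative only: the payments table and its columns are invented.
    public class PaymentQueries {

        // Before: a hard-coded, string-concatenated statement scattered through
        // the legacy code (and an SQL injection risk):
        //   "SELECT status FROM payments WHERE id = '" + paymentId + "'"

        // After: one parameterized query behind a method boundary. Callers no
        // longer see any SQL, so this logic can migrate into a microservice.
        static String findPaymentStatus(Connection conn, String paymentId)
                throws SQLException {
            String sql = "SELECT status FROM payments WHERE id = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, paymentId);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("status") : null;
                }
            }
        }
    }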

Ensure Data Integrity

Most databases use triggers that execute code in response to events. For example, a financial transaction is sent for authorization, and its corresponding data is placed in a database. A reversal of that transaction is received, which triggers the code to revise the transaction status field. To ensure data integrity, design the new system to capture the data from the legacy system. Eventually, the new database will contain the most recent information. Older data can be purged or archived, depending on the data storage requirements.

Why Modernization Needs Continuous Monitoring

Modernization requires continuous monitoring. For example, checking the security designed into each microservice can ensure a robust security posture when the modernization is complete. Here are three areas to act on when moving to microservices.

Create Seamless Communication

Microservices should communicate seamlessly, whether it’s through APIs or other messaging services. Message gateways should handle routing, request filtering, and rate limiting. Including a mechanism to allow retries if a request fails adds resiliency to microservice implementations. Internal communications can be monitored using existing tools. Service mesh technology can also monitor internal communications; however, implementing a service mesh at the start of a modernization process is not recommended as it adds to the project’s complexity.

Build in Rollback Capabilities

While the Strangler Fig pattern for microservices can minimize the odds of a catastrophic failure, that doesn’t mean that every service will work. Ensure there are built-in mechanisms that automatically roll back to the last functioning state. Each microservice should report its operational health to ensure operational integrity. 

Eliminate Cascading Service Failures

When a microservice fails, the failure should not impact the rest of the application. A circuit breaker pattern can act as a fail-safe to prevent cascading failures. After a preset number of failed requests, the breaker trips and stops sending traffic to the failing service, then periodically retries the connection until communication is restored.
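
Here is a minimal sketch of that behavior in Java, with invented thresholds. In practice, teams usually adopt a battle-tested library such as Resilience4j rather than rolling their own breaker.

    import java.time.Duration;
    import java.time.Instant;
    import java.util.function.Supplier;

    // Minimal circuit breaker sketch: after FAILURE_THRESHOLD consecutive
    // failures the breaker "opens" and fails fast; once the cooldown elapses it
    // lets one probe request through to test whether the service has recovered.
    // Both thresholds are illustrative assumptions.
    public class CircuitBreaker {

        private static final int FAILURE_THRESHOLD = 3;
        private static final Duration COOLDOWN = Duration.ofSeconds(30);

        private int consecutiveFailures = 0;
        private Instant openedAt = null;

        synchronized <T> T call(Supplier<T> request, T fallback) {
            if (openedAt != null) {
                if (Duration.between(openedAt, Instant.now()).compareTo(COOLDOWN) < 0) {
                    return fallback;          // open: fail fast, spare the service
                }
                openedAt = null;              // cooldown over: allow one probe
            }
            try {
                T result = request.get();
                consecutiveFailures = 0;      // success closes the breaker
                return result;
            } catch (RuntimeException e) {
                if (++consecutiveFailures >= FAILURE_THRESHOLD) {
                    openedAt = Instant.now(); // trip (or re-trip) the breaker
                }
                return fallback;
            }
        }
    }

A caller would wrap each outbound request, for example breaker.call(() -> paymentClient.getStatus(id), "UNKNOWN") with hypothetical names, so a failing service degrades gracefully instead of stalling the whole application.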

Automating Modernization

Assessing a modernization effort takes hours of combing through code. It means evaluating code complexities such as god classes, unreachable or dead code, and code proliferation. Planning involves deciding on microservice granularity. Too many services generate overhead and add complexity. Too few can reduce agility and hamper independent operations.

Automating assessments using AI-based platforms can save hours of labor and provide a more accurate result. Static and dynamic analyses evaluate the existing codebase for the following:

  • Technical debt
  • Interdependencies
  • Domain boundaries
  • Code complexity
  • Risk

Through the analyses, automated solutions can quantify the effort needed to refactor an application. With the results, development teams can identify a starting point for modernization. 

Related: Four Advantages of Refactoring that Java Architects Love

Reduce Risk and Increase Success with Automated Refactoring

With the right automated tools, monolithic applications can be modernized and deployed quickly. vFunction’s Code Copy can identify dependencies, divide services by domain, and introduce newer frameworks. It’s a multidimensional analytical approach that tracks code behaviors, call stacks, and database usage.

Using an automated refactoring platform, organizations can quickly convert monolithic code to microservices that fit within a Strangler Fig pattern approach to modernization. Such platforms can help identify migration sequences and determine the scope of each microservice. Automated tools can even flag legacy services that should not be turned into microservices.

Before starting a green-lighted modernization project, contact vFunction to request a demo and see how automation supports the Strangler Fig pattern.

How Opportunity Costs Can Reshape Measuring Technical Debt

As a Chief Information Officer (CIO) or Chief Technology Officer (CTO), you and your team may have spent weeks, if not months, every year measuring the technical debt of your legacy applications and infrastructure. You’ve examined aging frameworks, software defect patterns, code quality, release frequencies, and technical debt ratios. You’ve presented the data to other executives. You’ve even explained the modernization process to the company’s Board. When it comes time to approve the process, everyone hesitates. 

The CEO understands that maintenance costs will increase the older the technology becomes. The Board knows that legacy-system programmers are hard to find. They realize that the old technology will eventually reach its end of life if it hasn’t already. The decision-makers weigh that information against the cost of modernization, the potential operational disruption, and the lost productivity as the migration occurs — and decide to wait.

Sound familiar? Failing to receive approval can be disheartening after investing significant resources in trying to measure technical debt. However, the effort isn’t a total loss. The data can be used to demonstrate the opportunity costs of inaction. After all, business leaders understand the concept of lost opportunities.

Business executives know that reducing financial debt frees funds for investing in new markets or launching new products. Paying off technical debt is no different. With less technical debt, organizations will have the agility to take advantage of future opportunities. They can pivot quickly when unexpected events change the economic landscape. Unfortunately, developers rarely make technical decisions or justify modernization in terms of opportunity costs.

What Are Opportunity Costs When Measuring Technical Debt?

Opportunity cost in economics is the value of the next-best alternative when a decision is made. It represents what is lost when one option is chosen over another. People make either/or decisions every day, most without thinking of the opportunity costs. 

For example, you spend $10.00 on a cup of coffee on your way to work (even if the walk is from one room of your house to another). The explicit opportunity cost is whatever else you could have purchased with that $10.00. But opportunity costs have an implicit component as well.

Suppose you could have used the money to buy ice cream for yourself and your child. The experience of buying and eating the ice cream together strengthens your relationship. How do you put a price on that experience? Quantifying implicit costs is difficult, if not impossible, yet they can be essential intangibles that guide a decision.

When developers do not consider opportunity costs, they make decisions that often lead to technical debt that prevents organizations from achieving their business goals. Let’s look at how opportunity costs become technical debt.

How Opportunity Costs Become Technical Debt

Technical debt is the opportunity cost of a prior decision. Most development projects start with three variables — time, cost, and quality. The shorter the timeline, the higher the cost and the lower the quality. Limited resources (costs) can impact the quality and timeliness of the deliverable. Higher quality usually requires more time and money. 

Given the circumstances, most project managers and developers know which variable to prioritize. If software delivery is running late, the choice becomes a solution that doesn't lengthen the timeline. What doesn't happen is an assessment of the opportunity costs of the options that weren't selected. Those neglected opportunity costs can turn into technical debt.

Let’s assume a legacy system keeps its settings in a series of configuration files, except for one module, which hard-codes the data in a table. No one knows why; they assume other priorities got in the way. Years later, the table has to be addressed because new data needs to be added. Converting the table into a configuration file is the explicit cost that wasn’t calculated when the decision was made to leave the table alone.
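
As a hypothetical illustration (the class, keys, and file name are invented), the shortcut and its deferred fix might look like this in Java:

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Map;
import java.util.Properties;

public class TransactionFees {

    // The shortcut taken years ago: data hard-coded in the module, so adding
    // a new entry means editing, recompiling, and redeploying the code.
    static final Map<String, String> HARD_CODED_FEES = Map.of(
            "ACH", "0.25",
            "WIRE", "15.00");

    // The fix that was never scheduled: load the same data from a
    // configuration file, as the rest of the system already does.
    static Properties loadFees(String path) throws IOException {
        Properties fees = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            fees.load(in);
        }
        return fees;
    }
}

The explicit cost of the conversion is small on its own; the real cost shows up later, when every new data change has to go through a code release.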

Using Opportunity Costs to Reshape the Technical Debt Discussion

Whether modernizing legacy systems or reducing technical debt, the goal is to replace or remove code that inhibits an organization’s ability to achieve business goals. As part of the process, it’s assumed that past methods would be revised to create a system that minimizes technical debt. 

Modernization does not always lead to reduced technical debt. According to McKinsey, 20% to 40% of a company’s technology landscape is absorbed by technical debt. IT departments discuss agile development methods but fail to implement practices to minimize debt. They rush to meet sprint deadlines and opt for solutions that increase technical debt. If the debt is not addressed in a later iteration, it continues to grow.

Calculating Technical Debt

The first step in determining opportunity costs is calculating the cost to remove the technical debt. Several methods exist for calculating technical debt, including the following:

  • Code quality. Look at lines of code, nesting depth, cognitive complexity, maintainability, and similar metrics. If quality metrics begin to slip, technical debt is increasing.
  • Defect ratios. Compare the number of new defects against fixed defects. A high ratio indicates growing technical debt, while a low ratio indicates little debt.
  • Rework. Stable code should require minimal upkeep. Tracking which modules or code segments are being reworked is one way to assess technical debt. If code segments require repeated reworking, the code may be contributing to technical debt.
  • Completion time. Low-priority fixes should not consume significant resources. When developers take longer than expected to address a defect, the code may be adding to technical debt. Tracking time to complete can flag pockets of accumulating debt.
  • Technical debt ratio. Compare what it costs to fix the existing code (remediation cost) against what it would cost to rewrite it (development cost), as shown in the sketch after this list.
  • Automated tools. AI-based tools can help identify and quantify technical debt, using algorithms to provide an objective assessment.
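
For instance, one common formulation of the technical debt ratio divides the estimated cost of remediating known issues by the estimated cost of rebuilding the application. A minimal sketch with invented figures:

public class TechnicalDebtRatio {

    // TDR = (cost to remediate known issues / cost to rebuild from scratch) * 100
    static double tdr(double remediationCost, double rebuildCost) {
        return remediationCost / rebuildCost * 100.0;
    }

    public static void main(String[] args) {
        double remediation = 200_000;  // estimated cost to fix known issues
        double rebuild = 1_500_000;    // estimated cost to rewrite the app
        System.out.printf("Technical debt ratio: %.1f%%%n",
                tdr(remediation, rebuild)); // prints 13.3%
    }
}

A ratio above an agreed threshold (5% is a common rule of thumb) flags an application for deeper assessment.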

Because measuring technical debt can be time-consuming, AI-based tools that learn as they analyze legacy code can streamline the process. With a less labor-intensive approach, IT departments can spend more time evaluating opportunity costs without sacrificing the detailed analysis of technical debt.

Related: Evolving Toward Modern-Day Goals with Continuous Modernization

Determining Opportunity Costs

Let’s assume that the technical debt for a transaction processing module is $1 million. The module is a core component of the back office that most people view as having minimal impact on customer-facing improvements. When assessing the pros and cons of the modernization project, cost reduction seems to be the primary reason for approval.

Rather than focus on lowering costs when asking for approval, focus on opportunity costs if no action is taken.

Let’s use the transaction processing module. The existing code lacks flexibility, and adding a new transaction type would require rewriting the module. Now let’s assume peer-to-peer transfers will be a new transaction type within two years.

The government may begin regulating peer-to-peer (P2P) payments, which the existing system cannot support. Recent research indicates that 84% of the population has used P2P transfers and that about half of those use the service at least once weekly. Given that the US adult population was almost 260 million in 2020, a potential market share of even 5% would equal 13 million people. If 5.5 million of them made a P2P transfer weekly at a fee of $0.05 per transaction, the lost transaction revenue (i.e., the opportunity cost) for one year would be almost $15 million.
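
The arithmetic behind that estimate is simple enough to verify in a few lines; this sketch just replays the figures from the scenario above:

public class P2POpportunityCost {

    public static void main(String[] args) {
        long weeklyUsers = 5_500_000L;    // weekly P2P users assumed above
        double feePerTransaction = 0.05;  // $0.05 per transaction
        int weeksPerYear = 52;

        double lostAnnualRevenue = weeklyUsers * feePerTransaction * weeksPerYear;
        System.out.printf("Lost annual revenue: $%,.0f%n", lostAnnualRevenue);
        // prints: Lost annual revenue: $14,300,000
    }
}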

Suddenly, the discussion isn’t about how much modernizing the module will cost but how much revenue would be lost if it isn’t.

Communicating the Opportunity Costs of Technical Debt

Customers and competitors have forced companies to modernize applications that impact customer experience. Organizations have spent millions on digital transformation without touching the core systems at the heart of their infrastructure. If the legacy systems are working, why risk breaking them?

Executives remember the chaos of past system upgrades or replacements. They hesitate to touch core systems because they fear a repeat experience. They do not see the constraints a legacy system places on their ability to pivot quickly, gain data-based insights, deliver better customer experiences, and ensure sustainability.

It’s Not Just a Technical Problem

Removing technical debt is a business problem, yet most businesses view it as a technical one. IT must change that perception to win approval for modernizing core systems. Teams must still conduct due diligence to quantify technical debt and the cost to rewrite, remove, or refactor code. But they must present the information in business terms.

Using opportunity costs as a framework for presenting a business case reshapes the discussion. This effort requires a collaborative approach using subject matter experts (SMEs) who can help identify possible opportunity costs. In most cases, the SMEs know the system limitations but lack the technical knowledge to quantify the scope and cost of the effort.

Together, cross-functional teams can prepare business cases that illustrate the need for modernization beyond cost reduction. They can communicate solutions through opportunity costs that resonate with company executives. Combining resources makes reshaping the discussion possible and the chance of approval much higher.

Use the Right Tools to Start Modernizing the Smart Way

vFunction offers AI-based tools to assess the technical debt of Java and .NET applications. Using vFunction’s Assessment and Modernization Hubs, IT executives can provide a comprehensive analysis of technical debt that forms the basis for an opportunity cost assessment. Contact us to learn how our solution can help reshape your technical debt discussions.

Evolving Toward Modern-Day Goals with Continuous Modernization

Part 4 in the Uncovering Technical Debt Series from Intellyx, for vFunction. [Check out part 1 | 2 | 3 here.]

We’ve dug deep into our technology stacks, uncovering all of the legacy artifacts and monoliths that we could find from past incarnations of our organization. 

We’ve cataloged them, rebuilt them to modern coding standards, and decoupled their functionality into object-oriented, service-enabled, API-addressable microservices.

Now what? Are we modernized yet? 

Well, mostly. There are always some systems that just aren’t worth the time and attention to replace right now, even with intelligent automation and refactoring solutions.

Plus, we acquired one of our partner companies last year, and we haven’t had a chance to merge their catalog with our ordering system yet, so they are still sending us EDI dumps and faxes for urgent customer requests…

We’re never really done with continuous modernization

We’ve compared legacy modernization to the discipline of archaeology. But what happens once archaeologists finish their excavation and classification expeditions? Anthropologists can take over the work from here, interpreting societal trends and impacts even as the current culture continues to evolve and generate new artifacts. 

Similarly, discovering and eliminating technical debt isn’t a one-time modernization project; it’s a continuous expedition of reevaluation. Once an application is refactored, replatformed, or rearchitected, it creates concentric ripples, exposing more dependencies and instances of technical debt across the extended ecosystem, including adjacent applications within the organization, third-party services, and partner systems.

Mapping the as-is and to-be state of the codebase with discovery and assessment tools is useful for prioritizing the development teams’ targets for each project phase around business value, but business priorities will change along with the application suite. 

Development teams also get great utility from conducting modernization projects with the help of AI-driven code assessment and refactoring solutions like vFunction Code Copy, but they can realize even greater benefits by retaining the history of what worked (and didn’t work) to inform future transformations.

Not every modernization project works out equally well, but when the hard lessons of modernization feed back into the next assessment phase, this virtuous cycle can become part of the muscle memory of the organization, allowing mental energy to be spent on the most important choices that affect the long-term goals of the business.

Putting technical debt to rest: what to expect

No computer science student or self-taught coder sets out to spend a career finding and fixing bugs in their own code, much less someone else’s. Yet developers spend an estimated 30 to 50 percent of their time on rework rather than on innovation and new features that add perceived business value.

Besides the perceived thanklessness of the effort, developers encounter morale-destroying toil when sifting through legacy code, which usually contains lots of class redundancies, recursive methods, poor documentation and a general lack of traceability, resulting in slow progress.

Continuous modernization offers a way out of this thankless job, by preventing technical debt from collecting during each assessment and refactoring phase. 

Here are some of the levers teams are pulling for successful long-term improvements:

  • Continuous assessment. The best-performing initiatives aren’t just conducting initial assessments; they are continuously mapping, measuring, and observing modernization efforts before, during, and after each refactoring run.
  • FinOps practices bring financial concerns and tradeoffs to each modernization selection process. IT buying executives have been doing ROI analyses for vendor selection and capex computing investments for years. Now, savvy buyers are getting better cost justification for money spent on modernization, with real financial metrics for resources, employee and customer retention, and delivered customer value.
  • SLOs (service-level objectives) offer positive motivation for time-and-labor savings and incremental delivery of new services, in contrast to the negative contractual penalties enforced through SLA failures. Developers are incentivized to meet goals such as faster refactoring projects, faster automated deployments, and higher-value updates, with fewer hitches and less rework required.
  • Qualitative business goals are equally important to success. Better team morale improves productivity and employee retention, versus trying to replace high-quality people with new hires who could take months to get up to speed. Developers love working for agile enterprises, where they can test theories and ultimately help the application suite evolve faster to meet changing customer needs.

Trending toward velocity and morale at Trend Micro

Trend Micro is considered a global leader in cloud workload security, with several successful products under the banner of its platform – but that didn’t mean its modernization journey started without major headaches.

Much of their existing product suite, with more than 2 million lines of code and 10,000 independent Java classes, was built before secure API connections between cloud infrastructure and microservices were fully sussed out by the development market. Therefore, earlier customers were more inclined to trust on-premises installations and updates of vital virus, spam and spyware prevention software.

As the modern trends of SaaS-based vendors and cloud-based enterprise applications really hit stride over the last decade, Trend Micro started offering a re-hosted version of its suite under its CloudOne™ Platform banner.

Their initial lift-and-shift of one module’s code and data store to AWS offered some scalability and cost benefits due to elastic compute resources, but as the user base grew, it was becoming harder and harder for product dev teams to get a handle on inter-product dependencies that hindered future releases and updates to meet customer needs. Morale suffered as the replatforming took about a year.

Trend Micro turned to vFunction to identify and prioritize modernization of their most critical “Heartbeat” integration service – with more than 4000 Java classes that take in data from sensors, event feeds and data services across the product suite.

Then, using vFunction for modernization, the team was able to visually understand code complexity, applying AI to identify essential, interconnected, and circular dependencies and to deprecate dead code that would no longer add value for customers going forward.

Through refactoring, they were able to decide which classes should be included as part of the new Heartbeat service, and which should be kept in a common library for shared use across other product modules in the future.

This modernization project took less than 3 months – a 4X speed improvement over the previous project, with successive update deployment times decreased by 90%. Best of all, morale on the team has improved by leaps and bounds.

The Intellyx Take

Continuous modernization offers enterprises a lasting bridge from the monolithic past to a microservices future, but with constant change at enterprise scale, the journeys across this bridge will never really end.

To get to the bottom of the biggest obstacles of modernizing our digital estates, we must first assess and prioritize code refactoring and application architecture efforts around resolving technical debt.

Then, our intrepid teams can venture forth, digging to unearth the artifacts and digital foundations of our organizations, transforming our applications into modular cloud native services, resetting the values of our shared culture, and adapting our architectures to meet the challenges of a global, distributed, hybrid IT future.

Can you dig?

©2022 Intellyx LLC. Intellyx retains editorial control of this document. At the time of writing, vFunction is an Intellyx customer. Image credit: Licensed from Alamy “2001: A Space Odyssey” movie still.

Creating a Technical Debt Roadmap for Modernization

Every company carries a little technical debt, but keeping it below the recommended 5% can be challenging. If companies aren’t careful, the debt can grow to 10%, 50%, or even 80-90% in extreme cases. According to McKinsey, the average technical debt is between 20% and 40%. Repaying that quantity of debt takes planning, vigilance, and continuous attention. It takes building and executing a technical debt roadmap that balances the needs of the present while paying down the technical debt of the past.

Organizations understand financial debt and its effects. They evaluate different repayment options. They crunch numbers and perform analyses until they’ve created a repayment plan that lowers the debt without constricting growth. Executives understand that debt can hamper innovation and growth. With less money to invest in new product development, businesses lose their edge over competitors that carry less debt.

Despite their understanding of financial debt, most companies fail to apply the same principles to technical debt. There’s little planning, and decisions are rarely based on data. As a result, enterprises invest in applications that don’t lower technical debt. They devote resources to solutions that are reaching their end of life. Executives struggle to find a place to start.

Without a Technical Debt Roadmap

The 2022 McKinsey study mentioned above included 220 organizations across different business sectors, and it found that the percentage of technical debt a company has correlates with business performance. Of the participating companies, those with the lowest technical debt ratio experienced 20% higher revenue growth than those with the highest debt ratio. 

The study calculated technical debt via a “Technical Debt Score” (TDS) value, and the research found that businesses in the bottom 20% of technical debt, with the poorest TDS, were 40% more likely to cancel or fail to complete modernization efforts.

The top performers spent, on average, 50% more on modernization than those in the lowest percentiles. As they paid down their debt, they remained disciplined in how and where they spent their technology dollars. These companies learned through the process how to use technology to drive innovation and increase revenue.

Related: Eliminating Technical Debt: Where to Start?

Originally, technical debt referred to the consequences of software developers placing delivery deadlines over technical, architectural, or design considerations. It’s what happens when shortcuts in code quality are taken to meet customer requirements. Today’s technical debt has expanded to include any decision that impacts a company’s technology stack.

Determining the size of technical debt is the first step in creating a roadmap. The process should encompass such factors as:

  • Defect ratios
  • Completion time
  • Rework
  • Code quality
  • Architectural debt

Some of these factors are easier to assess, while others, such as code quality, require more intensive analysis. However, all factors should be evaluated to ensure a solid technical debt roadmap.

Defect Ratios

No application is perfect; however, as software matures, the number of new defects should decline. When the reverse happens, the defect ratio increases, and technical debt grows with it. Left unchecked, the software may reach a point where it is irreparable, and the legacy solution can no longer operate in a modernized environment.
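
A simple way to watch this metric, with invented release numbers, is to track the ratio of newly opened to fixed defects over consecutive releases:

public class DefectRatioTrend {

    // A ratio above 1.0 means defects are being created faster than fixed.
    static double ratio(int newDefects, int fixedDefects) {
        return (double) newDefects / fixedDefects;
    }

    public static void main(String[] args) {
        System.out.println(ratio(12, 15)); // 0.8 -> debt being paid down
        System.out.println(ratio(18, 15)); // 1.2 -> debt starting to grow
        System.out.println(ratio(30, 12)); // 2.5 -> debt accruing quickly
    }
}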

Completion Time

When an engineer or developer is assigned trouble or support tickets, how long do they take to complete? Even low-priority tickets can reveal growing technical debt. For example, an incorrect value appears in a report. Because the legacy code stores data in internal tables rather than a database, the developer has to trace how the software produces the value used in the report. In a monolithic architecture, that could mean combing through hundreds of lines of code, and if the value is calculated, the time to resolve increases further.

Rework

Reworking the same code segment indicates a technical debt. An employee opens a support ticket for a legacy utility. The assigned developer sees it’s an easy fix and completes it in less than an hour. A week later, another developer accesses the same module to fix a different support ticket. This correction takes a little longer and requires reworking the first fix. When programmers are making fixes to fixes, it’s an indication that technical debt is accruing. 

Code Quality

Code quality requires more in-depth analysis than higher-level assessments, such as total defects or rework statistics. Quality code in relation to technical debt encompasses lines of code, code complexity, inheritance, maintainability, nesting, and couplings. Assessing quality may require tools to look at specific parameters to identify coding flaws.

Architectural Debt

Academic research on measuring architectural debt dates to 2012, when the authors of “In Search of a Metric for Managing Architectural Technical Debt”, Robert L. Nord, Ipek Ozkaya, Philippe Kruchten, and Marco Gonzalez-Rojas, created a metric to measure architectural technical debt based on dependencies between architectural elements. They used the method to show how an organization should plan development cycles and roadmap investments that account for the effect accumulating technical debt will have on the overall resources required for each subsequent release. This breakthrough study recently received the “Most Influential Paper” award at the 19th IEEE International Conference on Software Architecture.

Assessing Code Quality

When evaluating legacy code, poor quality doesn’t mean poor programming; it means the code falls short when judged by today’s standards. For example, older architectures packed thousands of lines of code into one large monolithic application designed to run on a single on-premises server. Today’s architectures break that large application into smaller microservices better suited to a cloud environment.

Assessing code quality provides data for resolving technical debt. The following metrics gauge the status of individual applications and feed directly into a technical debt roadmap.

Risk Index

Code dependencies are the bane of modernization efforts. Depending on how long the legacy system has existed and how many programmers have worked on it, finding dependencies is like looking for Waldo: they’re buried somewhere among the lines of code, and failing to address them before making changes can become a career-changing move.

Paying back a technical debt should not result in an unexpected application shutdown. With the right tools, organizations can identify dependencies to be evaluated before changes are made, reducing the risk of an epic fail. 

Complexity Index

Think of the complexity index as strings of lights. Is there anything more frustrating than trying to untangle holiday lights? A complexity index identifies how entangled class dependencies are. Like light strings, a few dependencies can be untangled and put to use. Too many dependencies may make it too costly to isolate into microservices. Knowing that upfront makes it easier to assess whether to retain or replace a legacy solution.

Debt Index

A debt index provides an overall assessment of an application’s technical debt. It combines the risk and complexity indices and compares the results across applications. Sorting the debt index from high to low can be the start of a technical debt roadmap.

Accepting the debt index without evaluating the complexity and risk values can skew the roadmap. Although complex entanglements often correlate with high risk, organizations must look at the details. After all, quality code is always about the details.
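
As a purely hypothetical illustration (the weights and scores are invented, and production tools use far richer models), combining the two indices and sorting the portfolio might look like this:

import java.util.Comparator;
import java.util.List;

public class DebtIndexExample {

    record App(String name, double risk, double complexity) {
        // Naive equal weighting of the two indices, each on a 0-100 scale.
        double debtIndex() {
            return 0.5 * risk + 0.5 * complexity;
        }
    }

    public static void main(String[] args) {
        List<App> portfolio = List.of(
                new App("billing-monolith", 80, 90),
                new App("reporting", 40, 30),
                new App("auth-service", 20, 10));

        // Sorting high to low suggests a starting order for the roadmap,
        // but each candidate still needs a human look at the details.
        portfolio.stream()
                .sorted(Comparator.comparingDouble(App::debtIndex).reversed())
                .forEach(a -> System.out.printf("%s: %.0f%n",
                        a.name(), a.debtIndex()));
    }
}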

Frameworks

As technologies advance, so do the frameworks. What was considered leading edge two years ago has become a standard everyone uses. Frameworks that have existed for decades may no longer run on supported operating systems. Dependencies may tie to third-party solutions that no longer exist. These aging frameworks pose security risks.

A recent example of security risks in existing frameworks is the Log4j vulnerability discovered in December 2021. This flaw was a zero-day vulnerability in Apache’s logging library. Although Apache had released later versions, many organizations retained the older version for compatibility with existing architectures.

Understanding the weaknesses in older frameworks should be part of every technical debt roadmap. If vulnerabilities can be patched, an older framework may place lower on the modernization list than a newer framework with no available security patches.

Future Proof

Technical debt accrues every day. Companies decide on a quick fix to get a business-critical solution back in operation as quickly as possible. Maybe the delivery date doesn’t allow for the necessary rework, so a patch ships instead, and technical debt increases.

Modernizing existing architecture also requires that solutions are compatible with the latest compilers, libraries, and frameworks. Staying as current as possible reduces ongoing technical debt. With less time spent on lowering debt, businesses can devote more resources to innovation and growth.

Related: Go-to Guide to Refactoring a Monolith to Microservices

Creating a Modernization Roadmap

Part of creating a technical debt roadmap is deciding how best to address modernization. Options may include refactoring, replatforming, and rearchitecting. Each approach may be part of an organization’s plan to lower technical debt.

Refactoring

Refactoring turns messy code into clean code. Clean code has fewer complexities, eliminates duplication, and is easier to maintain. Messy code can take longer to compile or throw errors that get corrected multiple times by different programmers. With large projects and many programmers, it’s easy to lose control of the code. Refactoring cleans the code so it runs faster and performs better.
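
As a small, generic illustration with invented names, one classic refactoring pulls duplicated logic into a single well-named method, removing the drift that creeps in when different programmers patch the same rule:

public class PriceCalculator {

    // Before: the same discount rule pasted into several call sites,
    // each drifting slightly as different programmers patched it.
    double checkoutTotal(double subtotal, boolean loyaltyMember) {
        double discount = loyaltyMember ? subtotal * 0.10 : 0.0;
        return subtotal - discount;
    }

    double invoiceTotal(double subtotal, boolean loyaltyMember) {
        double discount = loyaltyMember ? subtotal * 0.1 : 0; // subtle drift
        return subtotal - discount;
    }

    // After: one rule in one place, with one spot to change and test.
    double applyLoyaltyDiscount(double subtotal, boolean loyaltyMember) {
        return subtotal - (loyaltyMember ? subtotal * 0.10 : 0.0);
    }
}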

Replatforming

Replatforming adds functionality to take advantage of cloud infrastructures. It doesn’t modify the application. It can improve an application’s ability to scale and expedite interactions with cloud-based data stores. It can be a cost-effective way to leverage cloud functionality without the cost of replacing or rearchitecting code.

Rearchitecting

Designing an application to operate in the cloud means rearchitecting it. Developers and engineers start from the ground up to redesign an existing solution as a cloud-native application. While rebuilding an application is labor-intensive, it may be the only way to modernize an existing solution.

Managing Technical Debt

The McKinsey article referred to technical debt as dark matter: it exists and its impact can be felt, but it can’t be seen or measured directly. At vFunction, we politely disagree. Technical debt is quantifiable through automated tools that leverage advanced technologies such as artificial intelligence, and the same tools can help move monolithic structures to microservices for cloud-native deployment.

If you’re interested in creating a technical debt roadmap based on data, request a demo of our platform. Our team is excited to show you how to quantify and lower your technical debt.

Common Pitfalls of App Modernization Projects

In today’s market environment, the ability to quickly take advantage of new technological capabilities is of paramount importance to a company’s ability to maintain or enhance its competitive position. That’s why for many businesses, the modernization of their legacy application portfolio has become not just a high priority but an existential necessity.

The problem such organizations face is that the legacy apps they depend on for some of their most essential business processes actually hinder their ability to keep pace with rapidly changing technological and marketplace conditions. Stefan Van Der Zijden, VP Analyst at Gartner, puts it this way:

“For many organizations, legacy systems are seen as holding back the business initiatives and business processes that rely on them… application leaders must look to application modernization to help remove the obstacles.”

The Importance of Application Modernization

Legacy apps are typically structured as monoliths, meaning that the codebase is organized as a single, non-modularized unit that has functional implementations and dependencies interwoven throughout. Updating such code to interoperate with the cloud-native systems and resources that dominate today’s technological landscape is difficult, time-consuming, risky, and costly.

To overcome this difficulty, organizations must modernize their legacy monolithic codebases to convert them into modern, cloud-native applications that can easily integrate into today’s technological environment. 

Most companies have not only recognized this fact but are acting on it – in a recent study conducted by Wakefield Research, 92% of respondents said their companies are either currently modernizing their legacy apps or are actively planning to do so.

Challenges of Application Modernization

Although application modernization is now considered by many companies to be essential, getting it right can be difficult. According to the Wakefield study, 79% of application modernization projects fail to achieve their goals. 

Application modernization efforts have historically been time-consuming and costly: the typical modernization project lasts 16 months and costs about $1.5 million—and more than a quarter of Wakefield survey respondents (27%) say their projects took two years or more. In a recent survey, 93% of respondents characterized their modernization experience as “extremely or somewhat challenging.”

Related: App Modernization Challenges That Keep CIOs Up at Night

App Modernization Pitfalls to Avoid

Let’s take a look at some common pitfalls that can, if you fail to avoid them, add your project to that 79% failure rate:

1. Inadequate Management Support

Buy-in from an organization’s executive management team is indispensable to legacy app modernization success. In the Wakefield survey, both executives and architects cited a lack of “prioritization from management” as a major factor that “stopped modernization projects in their tracks.”

If a company’s executive management isn’t on board with the necessity for legacy app modernization and with the ROI such projects can be expected to yield, the budget, personnel, and other required resources either won’t be supplied at all or won’t be maintained at an adequate level. 

When changing marketplace conditions cause the organization to readjust its priorities, modernization projects can sometimes lose the management focus and budgetary support needed for success. To avoid that happening, you must be prepared to make and remake the case for the business utility and ROI of your modernization efforts as marketplace conditions continually evolve.

2. Failure to Adequately Address Cost Concerns

Nearly 50% of both executives and architects in the Wakefield study agreed that securing the needed budget and other resources is the most difficult step in implementing a modernization project. That’s typically because the executives who control the purse strings haven’t been given reliable information that convinces them that the return on the not-inconsiderable investment required for such projects will be great enough to justify the financial and business risks.

By conducting a data-driven assessment of your legacy application estate, you can provide your organization’s management team with accurate, quantified data that makes the business case for investing the budgetary and other resources an app modernization project will require.

3. Misalignment Between Business and Technology Teams

The Wakefield study reports that 97% of survey respondents expected that someone in their organization would push back against modernization proposals. Such objections commonly occur when various stakeholders are not on the same page. Here are some typical reasons why that may occur:

  • The risk seems too great—Because legacy app modernization involves significant changes to critical systems, there is a definite degree of risk attached to such efforts. In the absence of trustworthy information and specific, well-developed plans that mitigate the risk factors, business and IT stakeholders may be reluctant to accept the risks to business operations that a modernization project represents.
  • Stakeholders fear large-scale change—When legacy applications are modernized, some associated workflows will usually also change. Well-established processes may be altered, and workers may need to be retrained or reassigned. Such developments introduce levels of uncertainty and instability that business stakeholders may be wary of.
  • Stakeholders fear losing their role—The business process changes that arise from app modernization efforts may threaten the traditional roles of some stakeholders or seem to relegate their perspectives and concerns to a lower priority level.

To avoid having such concerns become sources of pushback, stakeholders should be presented with a well-developed, data-driven modernization plan that addresses their unique issues.

4. Failure to Accurately Set Expectations

In the Wakefield study, this was the #1 reason given by respondents who started modernization projects they didn’t complete. Areas of particular concern include unrealistic expectations relating to budget and schedule requirements and anticipated project results such as improvements in engineering velocity and application innovation. 

To overcome this obstacle you need to be able to supply stakeholders with accurate, quantified data regarding the complexity of the task and the timeframe and budget that will be required for completing it.

In addition, you must ensure that your modernization methodology can produce the results you promise. Companies often make the mistake of thinking that just moving an application to the cloud will provide an acceptable degree of modernization. That’s not the case. Such a migration (often called a “lift and shift”) retains all the disadvantages of the application’s monolithic architecture.

True modernization only occurs when the app is not only migrated to the cloud but is refactored from a monolith to a microservices architecture. Only then will you fully reap the benefits that make a modernization project worthwhile.

5. Failure to Make Required Organizational Structure Changes

App modernization is far more than just a technical exercise. IBM puts it this way:

“A cultural transformation is also imperative. Organizational roles, business processes and IT skills must evolve and advance for your cloud migration and application modernization to be a success.”

Conway’s Law observes that a system’s design mirrors the communication structure of the organization that builds it. In practice, that means a software development group’s structure must align with the architecture of the application it intends to produce. When that doesn’t happen, an app modernization project is headed for trouble. Software engineer Alex Kondov is adamant that “you can’t fight Conway’s Law,” and supports that declaration with this observation:

“Sadly, often a company’s structure may not support the system it wishes to create. Time and time again, when a company decides that it doesn’t apply to them they learn a hard lesson.”

6. Inadequate Skills or Training

A legacy app modernization project is a complex process that requires a level of expertise many companies don’t have in-house. So it’s no surprise that almost a third of respondents to the Wakefield survey cited a lack of worker skills or training as a key obstacle to success. 

Yet, in today’s job market, hiring and retaining highly skilled software developers can be a time-consuming and costly process. You can reduce this requirement by providing your development team with modernization tools that embody skills your developers may lack.

7. Lack of Intelligent Tools

In the Wakefield survey, this was the #1 reason cited by software architects for app modernization failures. The process of refactoring monolithic legacy apps to convert them to a cloud-native microservices architecture is a highly complex undertaking that may require unraveling tens of millions of lines of code to expose hidden functionalities and dependencies in the codebase. 

Doing this using an essentially manual approach may require many months or even years of developers’ time, and even then the risk of an unsatisfactory outcome is extraordinarily high. On the other hand, the use of a state-of-the-art, AI-enabled, automated modernization tool can speed up the process by orders of magnitude while all but eliminating the risk factors that plague manual efforts.

Related: The Easy Way to Transition from Monolithic to Microservices

App Modernization Starts with Using the Right AI Tech

Many companies attempt to modernize applications using general-purpose design, analysis, and performance monitoring tools. But these have proven to be inadequate for the task. 

What’s needed is a tool that’s specifically designed for modernization, with advanced AI capabilities that allow it to comprehensively analyze monolithic applications, reveal hidden dependencies and functional implementations, and benchmark the levels of technical debt, complexity, and modernization risk associated with each app. It should also be able to substantially automate the process of restructuring complex monolithic apps into microservices.

vFunction provides just such an automated, AI-empowered tool. The vFunction Assessment Hub measures the complexity, technical debt load, and modernization risk of each app. The vFunction Modernization Hub then automates about 90% of the process of refactoring monolithic codebases into microservices. 

To see first-hand how vFunction can help you avoid the pitfalls that have wrecked so many application modernization efforts, request a demo today.

Don’t Let Technical Debt Stymie Your Java EE Modernization

Part 3 in the Uncovering Technical Debt series: An Intellyx BrainBlog for vFunction. Check out part one here. Check out part two here.

When Java Enterprise Edition (Java EE) hit the scene in the late 1990s, it was a welcome enterprise-class extension of the explosively popular Java language. J2EE (Java 2 Enterprise Edition, as it was called then) extended Java’s ‘write once, run anywhere’ promise to n-tier architectures, offering session and entity Enterprise JavaBeans (EJBs) on the back end, Servlets on the web server, and JavaServer Pages (JSPs) for dynamically building HTML-based web pages.

Today, more than two decades later, massive quantities of Java EE code remain in production – only now it is all legacy, burdened with technical debt as technologies and best practices have advanced.

The encapsulated, modular object orientation of Java broke up the monolithic procedural code of languages that preceded it. Today, it’s the Java EE applications themselves that we consider monolithic, fraught with internal dependencies and complex inheritance hierarchies that add to their technical debt.

Modernizing these legacy Java EE monoliths, however, is a greater challenge than people expected. Simply getting their heads around the internal complexity of such applications is a Herculean task, let alone refactoring them.

For many organizations, throwing time, human effort, and money at the problem shows little to no progress, as they reach a point where some aspect of the modernization project stymies them, and progress grinds to a halt.

Don’t let technical debt stymie your Java EE modernization initiative. Here’s how to overcome the roadblocks.

Two Examples of Java EE Technical Debt Roadblocks

A Fortune 100 government-sponsored bank struggled with several legacy Java EE applications, the largest of which was a 20-year-old monolith that contained over 10,000 classes representing 8 million lines of code.

Replacing – or even temporarily turning off – this mission-critical app was impossible. Furthermore, years of effort on analysis in attempts to untangle the complex internal interdependencies went basically nowhere.

The second example, a Fortune 500 financial information and ratings firm, faced the modernization of many legacy Java EE applications. The company made progress with their modernization initiative, shifting from Oracle WebLogic to Tomcat, eliminating EJBs, and upgrading to Java 8.

What stymied this company, however, was its dependence on Apache Struts 1, an open-source web application framework that reached end-of-life in 2013.

This aging framework supported most of their Java EE applications, despite introducing potential compatibility, security, and maintenance issues for the company’s legacy applications.

Boiling Down the Problem

In both situations, the core roadblock to progress with these respective modernization initiatives was complexity – either the complexity inherent in a massive monolithic application or in the complex interdependencies among numerous applications that depended on an obsolete framework.

Obscurity, however, wasn’t the problem: both organizations had full visibility into the inner workings of their Java EE applications. Each had its source code, and Java’s built-in introspection capabilities supplied ample data about how the applications behaved in production.

In both cases, there was simply too much information for people to understand how best to modernize their respective applications. They needed a better approach to making decisions based upon large quantities of data. The answer: artificial intelligence (AI).

Breaking Down the Roadblocks

When such data sets are available, AI is able to discern patterns where humans get lost in the noise. By leveraging AI-based analysis tooling from vFunction, both organizations got a handle on their respective complexity, giving them a clear roadmap for resolving interdependencies and refactoring legacy Java EE code.

The Fortune 100 bank’s multi-phase approach to Java EE modernization included automated complexity analysis, AI-driven static and dynamic analysis of running code, and refactoring recommendations that included the automated extraction of services into human-readable JSON-formatted specification files.

The Fortune 500 financial information firm leveraged vFunction to define new service boundaries and a common shared library. It then merged and consolidated several services, removing the legacy Struts 1 dependency in favor of a modern Spring REST controller. It also converted numerous Java EE dependencies to Spring Boot, a modern, cloud-native Java framework.
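
To give a flavor of that change, here is a minimal, invented sketch (not the firm’s actual code) of the kind of Spring REST controller that replaces a Struts 1 Action, its ActionForm, and its struts-config.xml entry with a single annotated class:

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class RatingController {

    // One annotated class replaces the Action, form bean, and XML mapping.
    @GetMapping("/ratings/{issuerId}")
    public String getRating(@PathVariable String issuerId) {
        // Real code would delegate to a service layer.
        return "rating for issuer " + issuerId;
    }
}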

The Business Challenges of Technical Debt

Both organizations were in a ‘for want of a nail, the kingdom is lost’ situation – what seems like a relatively straightforward issue stymied their respective strategic modernization efforts.

When such a roadblock presents itself, all estimates of how long the modernization will take and how much it will cost go out the window. Progress may stop, but the modernization meter keeps running: the initiative shows less and less value to the organization while the team continues to beat its head against the wall.

Not only does morale suffer under such circumstances, but the technical debt continues to accrue as well. In both situations, the legacy apps were mission-critical, and thus had to keep working. Even though the modernization efforts had stalled, the respective teams were still responsible for maintaining the legacy apps – thus making the problem worse over time.

The Intellyx Take

During the planning stages of any modernization initiative, teams hammer out reasonable estimates for cost, time, and resource requirements. Such estimates are invariably on the low side, and when a roadblock stops progress, the management team must discard them entirely.

Setting the appropriate expectations with stakeholders, therefore, is fraught with challenges, especially when those stakeholders are skeptical to begin with. Unless the modernization team takes an entirely different approach – say, leveraging AI-based analysis to unravel previously intractable complexity – stakeholders are unlikely to support further modernization efforts.

It’s important to note the role that vFunction played. It didn’t wave a magic wand, converting legacy code to modern microservices. Rather, it processed and interpreted the data each organization already had (both static and dynamic), leveraging AI to discern the patterns needed to make the right decisions and deliver timely modernization results. Considering the deep challenges these customers faced, such results felt like magic in the end.