Why do application modernizations fail 79% of the time? Red Hat and vFunction answered that question at Red Hat Summit 2023 and provided three recipes for success. Co-presented by Markus Nagel, Principal Technical Marketing Manager at Red Hat and Bob Quillin, Chief Ecosystem Officer at vFunction, the session is now available on-demand on the Red Hat Summit 2023 Content Hub (registration required).
Check out the full video to see the how-to details behind these recipes, including:
Recipe #1: Shift left and use observability, visibility, and tooling to understand, track, and manage architectural technical debt
Recipe #2: Use AI-based modernization & decomposition to assess complexity, identify domains & service boundaries, eliminate dead code, build common libraries
Recipe #3: Leverage the strangler fig pattern to shift traffic and workload from monolith to new microservices
To execute the strangler fig pattern, the session described how to use the new Red Hat Service Interconnect (based on the open source Skupper project) to connect the remaining monolith (and possibly other legacy components) with the new microservices on OpenShift.
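At the application level, the same traffic-shifting idea can be shown in a few lines of Java. The sketch below is not part of the session's Service Interconnect demo; it is a minimal, GET-only routing facade, with hypothetical service URLs and an /orders route standing in for the first capability extracted from the monolith:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Strangler fig in miniature: a facade sits in front of the monolith and
// proxies already-migrated routes to the new microservice, everything else
// to the legacy app. Both base URLs are placeholders for this sketch.
public class StranglerFacade {
    private static final String MONOLITH = "http://legacy-monolith:8080";
    private static final String ORDERS_SERVICE = "http://orders-service:8081";
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(9090), 0);
        server.createContext("/", exchange -> {
            String path = exchange.getRequestURI().getPath();
            // Shift traffic route by route as each capability is extracted.
            String target = path.startsWith("/orders") ? ORDERS_SERVICE : MONOLITH;
            try {
                HttpRequest request =
                        HttpRequest.newBuilder(URI.create(target + path)).build();
                HttpResponse<byte[]> response =
                        CLIENT.send(request, HttpResponse.BodyHandlers.ofByteArray());
                exchange.sendResponseHeaders(response.statusCode(), response.body().length);
                exchange.getResponseBody().write(response.body());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                exchange.sendResponseHeaders(502, -1); // upstream call interrupted
            } finally {
                exchange.close();
            }
        });
        server.start();
    }
}
```

In production this routing typically lives in a gateway, a service mesh, or, as in the session, Red Hat Service Interconnect rather than in hand-written code; the point is only that the monolith keeps serving every route you have not yet migrated.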
Why do application modernizations fail? Attempts to modernize monolithic applications into microservices, specifically the business-critical Java and .NET apps we depend on every day, can be frustrating and fraught with failure.
In this virtual session, we will:
Identify key reasons for failures from independent industry-based surveys.
Explore 3 proven recipes for successful modernization with case study examples, demonstrations, and deployments to Red Hat OpenShift.
Explore the role of artificial intelligence (AI)-augmented architectural (technical) debt assessment, observability-driven decomposition analysis, and strangler fig pattern rollouts.
Architects and product owners will learn how to use vFunction’s automated, AI-driven assessment and analysis of monolithic applications to deploy them on Red Hat OpenShift, while significantly reducing the effort and risk of the process.
Can technical debt cause business disasters? Just ask Southwest Airlines: their technical debt caused a shutdown during the 2022 Christmas season that cost the company more than $1 billion, not to mention the goodwill of irate customers who were stranded by the collapse of the carrier’s flight and crew scheduling system.
Or you could ask Elon Musk, whose new Twitter acquisition suffered its own chaos-inducing disruption in March of 2023 due to what one employee described as “so much tech debt from Twitter 1.0 that if you make a change right now, everything breaks.”
As these examples indicate, unaddressed technical debt can indeed pitchfork a company into a sudden and disastrous disruption of its entire operation. That’s why for many companies, addressing the technical debt carried by the mission-critical software applications they depend on is at the top of their IT priorities list.
But identifying and reducing technical debt can be difficult. And that’s especially true of architectural technical debt, which is often even harder to isolate and fix.
In this article, we’ll examine the challenges of architectural technical debt and see how continuous modernization, along with the “shift left” approach to quality assurance (which helps minimize that debt by beginning QA evaluations early in the development process), can substantially reduce a company’s vulnerability to technical debt disasters.
What is Architectural Technical Debt?
The Journal of Systems and Software describes technical debt as “sub-optimal design or implementation solutions that yield a benefit in the short term but make changes more costly or even impossible in the medium to long term.” Although the term has generally been applied to code, it also applies to architectural issues. The Carnegie Mellon Software Engineering Institute defines architectural technical debt similarly, in this way:
“Architectural technical debt is a design or construction approach that’s expedient in the short term, but that creates a technical context in which the same work requires architectural rework and costs more to do later than it would cost to do now.”
Architectural technical debt may be baked into an application’s design before coding even starts. A good example is the fact that most legacy Java apps are structured as monoliths, meaning that the codebase is organized as a single, non-modularized unit that has functional implementations and dependencies interwoven throughout.
Because the app’s components are all together in one place and communicate directly through function calls, this architecture may at first appear less complex than, for example, an architecture built around independent microservices that communicate more indirectly through APIs or protocols such as HTTPS.
But the tight coupling between functions in monolithic code imposes severe limitations on the flexibility, adaptability, and scalability of the app. Because functions are so interconnected, even a small change to a single function could have unintended consequences elsewhere in the code. That makes updating monolithic apps difficult, time-consuming, and risky since any change has the potential to cause the entire app to fail in unanticipated ways.
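A toy Java fragment makes that coupling concrete. The class names here are invented for illustration; the point is that in a monolith the dependency is a direct compile-time call, so a change to one class ripples into every caller:

```java
// Hypothetical monolith fragment. Billing calls straight into Inventory,
// so the two modules are coupled at compile time.
class Inventory {
    // Adding a parameter here (say, a warehouse id) breaks every caller
    // in the codebase at once; changing the method's behavior can break
    // callers more subtly at runtime.
    static int stockLevel(String sku) {
        return 42; // placeholder value for the sketch
    }
}

class Billing {
    double restockingFee(String sku) {
        return Inventory.stockLevel(sku) * 0.05; // direct function call
    }
}
```

In a microservices design, the same dependency would cross a network API, which is slower and more indirect but lets each side change and deploy on its own schedule.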
Challenges of Managing Architectural Technical Debt
Not only may initial design decisions insert architectural technical debt into apps up front, but changes that occur over time, through a process known as architectural technical drift, can be an even more insidious driver of technical debt.
Architectural technical drift occurs when, to meet immediate needs or perhaps because requirements have changed, developers modify the code in ways that deviate from the planned architecture. The result is that over time the codebase diverges more and more from the architectural design specification.
What makes such drift so dangerous is that while designed-in architectural debt can be identified by comparing the design specification against modern best practices, the ad hoc changes inserted along the way by developers are typically documented poorly—if at all.
The result is that architects often have little visibility into the actual state of a legacy codebase since it no longer matches the architecture design specification. And that makes architectural technical debt very hard to identify and even harder to fix.
The problem is that while architects have a variety of tools for assessing code quality through, for example, static and dynamic analysis or measuring cyclomatic complexity (a metric that reveals how likely the code is to contain errors, and how hard it will be to test, troubleshoot, and maintain), they haven’t had comparable tools for assessing how an app’s architecture is evolving or drifting over time.
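For readers unfamiliar with the metric, cyclomatic complexity can be computed by hand: it is the number of decision points in a method plus one. A small, invented Java example:

```java
public class Complexity {
    // Each if, loop, and short-circuit operator adds one decision point.
    // This method has four (if, ||, for, inner if), so its cyclomatic
    // complexity is 4 + 1 = 5: five independent paths to test.
    static int firstNegative(int[] values) {
        if (values == null || values.length == 0) { // if (+1), || (+1)
            return -1;
        }
        for (int i = 0; i < values.length; i++) {   // for (+1)
            if (values[i] < 0) {                    // if (+1)
                return i;
            }
        }
        return -1;
    }
}
```

No equally simple number has existed for architecture; tracking drift means observing the running system, not just the source.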
Why Measuring and Addressing Architectural Technical Debt is Critical
While code quality and complexity are key application health issues, architectural technical debt is an even higher level concern because unaddressed architectural deficiencies can make it very difficult, or even impossible, to upgrade apps to keep pace with the rapidly evolving requirements that define today’s cloud-centric technological environment.
For example, the monolithic architecture that characterizes most legacy Java codebases is notorious for having ingrained and intractable technical debt that imposes severe limitations on the maintainability, adaptability, and scalability of such apps.
But given the difficulty of detecting and measuring architectural technical debt, how can architects effectively address it to prevent it from eventually causing serious issues? As management guru Peter Drucker famously said, “You can’t improve what you don’t measure.”
The answer is by following a “shift left” QA strategy based on use of the advanced AI-based tools now available for detecting, measuring, monitoring, and remediating architectural debt and drift issues before they cause technical debt meltdowns.
Shifting Left to Address Architectural Technical Debt
In the traditional waterfall approach to software development, operational testing of apps comes near the end of the development cycle, usually as the last step before deployment. But architectural issues that come to light at that late stage are extremely difficult and costly to fix, and may significantly delay deployment of the app. The shift left approach originally aimed to alleviate that problem.
In essence, shift left moved QA toward the start of the development cycle—the technique gets its name from the fact that diagrams of the software development sequence typically place the initial phase on the left with succeeding phases added on the right. Ideally, the process begins, before any code is written, by assessing the architectural design to ensure it aligns with functional specifications and customer requirements.
Shift left is a fundamental element of Agile methodology, which emphasizes developing, testing, and delivering working software in small increments. Because the code delivered with each Agile iteration must function correctly, shift left testing allows verification of the design and performance of components such as APIs, containers, and microservices under realistic runtime conditions at each step of the development process.
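As a small illustration of what testing at each step looks like in practice, here is a JUnit 5 sketch (JUnit on the classpath is assumed, and the PriceCalculator rule is invented); the point is only that the behavior delivered in an iteration is verified on every commit rather than in a QA phase at the end:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Shift-left in miniature: the increment's behavior is pinned down by a
// test that runs in CI on every commit. The discount rule is a stand-in
// for whatever small piece the current iteration delivers.
class PriceCalculatorTest {
    static double discounted(double price, int quantity) {
        return quantity >= 10 ? price * 0.9 : price; // invented bulk-discount rule
    }

    @Test
    void bulkOrdersGetTenPercentOff() {
        assertEquals(90.0, discounted(100.0, 10), 1e-9);
        assertEquals(100.0, discounted(100.0, 9), 1e-9);
    }
}
```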
In this context, shifting left for architecture gives senior engineers and architects visibility into architectural drift throughout the application lifecycle. It makes modernization a closed-loop process in which architectural debt is proactively observed, tracked, and baselined, and anomalies are detected early enough to avoid disasters.
That’s especially beneficial for modernization efforts in which legacy apps are refactored from monoliths to a cloud-native microservices architecture. Since microservices are designed to function independently of one another, the shift left approach helps to ensure that all services integrate smoothly into the overall architecture and that any functional incompatibilities or communications issues are identified and addressed as soon as they appear.
The Importance of Continuous Modernization
One of the greatest benefits of legacy app modernization is that it substantially reduces technical debt. This is especially the case with monolithic apps—the process of refactoring them to a microservices architecture automatically eliminates most (though not necessarily all) of their technical debt.
But modernization isn’t a one-time process. Because of the rapid advances in technology and the quickly evolving competitive demands that characterize today’s business environment, from the moment an app is deployed it begins to fall behind the requirements curve and become out of date.
Plus, the urgency of those new requirements can put immense pressure on development and maintenance teams to get their products deployed as quickly as possible. That, in turn, often leads them to make “sub-optimal design or implementation solutions that yield a benefit in the short term.” And that, by definition, adds technical debt to even newly designed or modernized apps.
As a result, the technical debt of any app will inevitably increase over time. That’s not necessarily bad; it’s what you do about that accumulating debt that counts. Ward Cunningham, who coined the term “technical debt” in 1992, puts it this way:
“A little debt speeds development so long as it is paid back promptly with refactoring. The danger occurs when the debt is not repaid.”
That’s why continuous modernization is so critical. Without it, the technical debt carried by your company’s application portfolio is never repaid and will continue to increase until a business disaster of some kind becomes inevitable. As a recent Gartner report declares:
“Applications and software engineering leaders must create a continuous modernization culture. Every product or platform team must manage their technical debt, develop a modernization strategy and continuously modernize their products and platforms… Teams must ensure that they don’t fall victim to ‘drift’ over time.”
Until recently, it’s been difficult for software development and engineering leaders to establish a culture of continuous modernization because they lacked the specialized tools needed for observing, tracking, and managing technical debt in general—and architectural technical debt in particular. But the recent advent of AI-based tools specially designed for that process has been a game changer. They enable software teams to identify architectural issues, understand their complexity, predict how much time and engineering effort will be required to fix them, and actually lead the team through the refactoring process.
The vFunction Continuous Modernization Manager enables architects to apply the shift left principle throughout the software development lifecycle to continuously identify, monitor, manage, and fix architectural technical debt problems. In particular, it enables users to pinpoint architectural technical drift issues and remediate them before they contribute to some future technical debt catastrophe.
In March 2023, Amazon.com published an article on how it rearchitected its Prime Video audio/video monitoring service from a distributed microservices architecture to a ‘monolithic’ architecture running within a single Amazon Elastic Container Service (ECS) task.
Despite a reduction in infrastructure cost of over 90%, the seemingly counterintuitive move generated consternation across the cloud architecture community. Monoliths are ‘bad,’ laden with technical debt, while microservices are ‘good,’ free from such debt, they trumpeted. How could Amazon make such a contrarian move?
This controversy centers on what people mean by ‘monolith,’ and why its connotation is so negative. In general parlance, a monolith is a pattern saddled with architectural debt – debt that the organization must pay back sooner or later. Based on this definition, an organization would be crazy to move from an architecture with less debt to one with more.
But as the Amazon story shows, there is more to this story – not only a clearer idea of the true nature of architectural monoliths, but also the fundamental concept of architectural debt.
Architectural Debt: A Particular Kind of Technical Debt
As I explained in an earlier article, technical debt represents some kind of technology mess that someone has to clean up. In many cases, technical debt results from poorly written code, but more often than not, it results from evolving requirements that existing technology simply cannot keep up with.
Architectural debt is a special kind of technical debt that indicates expedient, poorly constructed, or obsolete architecture.
Even more so than the more familiar source code-related technical debt, architectural debt is often a necessary and desirable characteristic of the software architecture. The reason: too much software architecture in the early phases of a software project can cause systemic problems for the initiative that lead to increased costs and a greater chance of project failure.
In fact, the problem of architecture that is too much and too early, aka ‘overdesign,’ is one of the primary weaknesses of the waterfall methodology.
Instead, modern software principles call for ‘just enough’ or ‘just in time’ architecture, expecting architects to spend the minimum time on the task necessary to guide the software effort. If a future iteration calls for more or different architecture, then the architect should perform the additional work at that time.
Good vs. Bad Architectural Debt
Given such principles, you’d think that Amazon’s move to a monolith would be better received.
After all, the reason Amazon’s architects chose microservices in the first place was because such a decision was expedient and didn’t require excessive architectural work. The move to a monolith was simply a necessary rearchitecture step in a subsequent iteration.
Where the confusion arose was over the difference between this ‘good’ type of architectural debt – intentional ‘just enough, just in time’ architecture as part of an iterative design – and the ‘bad’ type: older, legacy architectures that may have served their purpose at the time, but are now obsolete, leading to increased costs and limited flexibility.
Examples of Good vs. Bad Architectural Debt
It may be difficult to distinguish between the two types of architectural debt. To help clarify the differences, here are two examples.
Example #1: Addressing good architectural debt.
An organization is implementing an application which will eventually have a global user base. The architects consider whether to architect it to support internationalization but decide to put this task off in the interests of expediency.
Eventually the development team must rework the app to support internationalization – a task that takes longer than it would have had they architected the app to support it initially.
Nevertheless, the organization was able to put the application into production more quickly than if they had taken the time to internationalize it, thus bringing in revenue sooner and giving themselves more opportunity to figure out how they should improve the application.
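The rework in this example usually boils down to replacing hard-coded strings with locale-aware lookups. A minimal Java sketch, assuming per-locale messages*.properties files on the classpath (all names here are illustrative):

```java
import java.util.Locale;
import java.util.ResourceBundle;

// After the rework: user-facing text comes from a resource bundle chosen
// by locale instead of being hard-coded. Retrofitting this means touching
// every call site that once said  return "Welcome!";  which is exactly the
// interest paid on the deferred internationalization decision.
public class Greeter {
    private final ResourceBundle messages;

    public Greeter(Locale locale) {
        // Loads messages.properties, messages_de.properties, etc.
        this.messages = ResourceBundle.getBundle("messages", locale);
    }

    public String welcome() {
        return messages.getString("welcome");
    }
}
```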
Example #2: Addressing bad architectural debt.
An organization struggles with the limitations of its fifteen-year-old Java EE application, running on premises on, say, Oracle WebLogic. The app is now too inflexible to meet current business needs, and the organization would like to move the functionality to the cloud – a migration that WebLogic is poorly suited for.
The organization must first take inventory of their existing architecture, requiring architectural observability that can delineate the as-is architecture of the application, how it’s behaving in production, and what its most urgent problems are. The architecture team must also establish an architectural baseline and then determine how much the as-is architecture has drifted from it.
At that point, the organization must implement a modernization strategy that considers the technical debt inherent in the internal interdependencies among architectural elements (Java classes, objects, and methods in this case). Only then can it make informed modernization decisions for the overall architecture as well as the software components that make up the application.
Architectural observability from tools like the vFunction Architectural Observability Platform is essential for understanding and thus dealing with bad architectural debt. Such debt is difficult to identify and even more difficult to fix. In some cases, fixing architectural debt isn’t worth the trouble – but without architectural observability, you’ll never know which architectural debt you should address.
The Intellyx Take
The term ‘monolith’ is saddled with all the negative connotations of bad architectural debt, but as the Amazon example illustrates, that connotation paints the term with too broad a brush.
In reality, what constitutes a monolith has changed over time. Object-oriented techniques relegated procedural programs to the status of monolith. Today, cloud native architectures apply the label to the object-oriented applications of the Java EE days.
Understanding architectural debt, therefore, goes well beyond the labels people put on their architectures. With the proper visibility, architects can differentiate between good and bad architectural debt and thus begin the difficult but often necessary process of modernization in order to get a handle on their organization’s architectural debt.
In 2021, the Standish Group published a report on the effectiveness of different approaches to software modernization. The report looked at the efficacy of replacing legacy solutions with entirely new code or using existing components as a basis for modernization. The authors also identified an approach they called “Infinite Flow,” where modernization was a continuous process and not a project with a start and end date.
The Standish Group’s definition of infinite flow mimics that of continuous modernization (CM). CM is a continuous process of delivering software updates incrementally. It allows developers to replace legacy software iteratively with less user disruption. Both definitions focus on the ongoing nature of software delivery and its organizational impact.
The report authors determined that continuous flow processes deliver more value than other methodologies, such as agile or waterfall. They calculated that waterfall projects are 80% overhead and only return a net value of 20%. In contrast, continuous modernization operates with 20% overhead and delivers an 80% net value. They also calculated that 60% of customers were disappointed in the software delivered at the end of a large development effort. In comparison, only 8% of customers felt the same with continuous development processes.
If continuous modernization delivers a more significant net value and increases customer satisfaction, why aren’t more organizations using the methodology as they replace legacy systems? Let’s take a closer look at this strategy to determine why more companies don’t realize the benefits of CM.
What is Continuous Modernization?
The Information Systems Audit and Control Association (ISACA) defines continuous modernization as a strategy to evolve an existing architecture continuously and incorporate emerging technologies in the core business operating model. With CM, organizations develop solutions in increments that encourage frequent releases where the software is monitored and refined, feeding back into the development cycle.
The approach allows companies to gradually replace aging technologies that pose business risks. It enables businesses to add features and functionality to existing systems without disrupting operations. However, CM is more than a development strategy. It is a mindset.
Traditional software development is project-based. A scope of work is defined with a start and end date. It doesn’t matter if the development method is waterfall or agile. Cumulative software updates are released on a pre-defined date. After installation, bugs may be identified and fixed. Some flaws are added to the scope of work for the next release.
With CM, on the other hand, software development becomes part of a continuous improvement mindset where each iteration enhances the existing software. New software is deployed monthly, weekly, or daily. Unlike project-based development, changes are not withheld until a project scope has been completed. The steady stream of updates requires a cultural shift.
Traditional key performance indicators (KPIs) and measurement methods for software development no longer apply. Testing procedures are automated to keep pace with incremental software releases. End users see small changes in the user interface or functionality instead of massive changes delivered all at once. If organizations are to realize the following benefits of CM, they need to address the cultural changes necessary to support a continuous improvement model.
What Are the Benefits of CM?
The authors of the 2021 Standish report indicated that the flow-like modernization methodology had the following benefits:
Modernization using a series of microprojects had better outcomes than a single large project.
Microprojects achieved greater customer satisfaction because of built-in feedback loops.
Microprojects delivered higher net value.
Modernization using continuous improvement reduced risk and monetary loss.
Continuous modernization has a higher degree of sustainable innovation.
Continuous modernization increases application life.
Outcomes were evaluated in terms of time, budget, and customer satisfaction. In general, smaller projects in a continuous improvement model delivered better outcomes than more traditional large projects, especially in the areas of customer satisfaction, net value, and financial loss.
Increased Customer Satisfaction
Continuous modernization is less disruptive to operations. When large projects are delivered, the result is often downtime. Even if the software is installed after hours, the massive changes usually require user training. Struggling to learn the software while performing their jobs frustrates employees.
Since most large projects do not solicit extensive user input during development, the updated features may not operate as users expected. Customers become disgruntled when they are told the feature operates as designed, so it isn’t a bug and won’t be addressed until the next release.
With microprojects, small changes are made incrementally with minimal impact on user operations. Employees aren’t trying to learn new functionalities while performing their job. Soliciting feedback from users as changes are deployed means modifications can be incorporated into the iterative process.
Reduced Risk
Old code is risky code. Who knows what is lurking in those undocumented modules? Depending on the age of the software, everyone associated with the original project may have left the company. Suddenly organizations are faced with a knowledge deficit. How can they support the software if no one understands the code?
Twitter is an excellent example of the impact technical debt and knowledge deficit can have on a company. Long before Elon Musk took over Twitter, employees complained that parts of the application were too brittle. Some even suggested that the technical debt was too extensive, requiring a complete rewrite. Then, Musk began his widespread staff reduction. As a result, fewer employees were available to keep brittle code functional.
In March 2023, Twitter had an operational failure. Users were unable to open links. The API that allowed the data interchange was not working. After service was restored, the source of the failure was found to be an employee error. The one engineer assigned to the API made an incorrect configuration change. Removing old code reduces the risk of a disastrous outcome from a simple configuration change.
Reduced Technical Debt
Technical debt is no different from financial debt. At some point, it must be repaid. If it goes untouched, it only accumulates until an organization is no longer viable. A recent survey found that technical debt inhibits 70% of a company’s ability to innovate.
CM allows developers to gradually replace legacy code that contributes to technical debt. It also keeps the debt from growing. For example, companies that release software updates once a year accumulate debt while they are writing new code. Given the exponential rate of digital adoption, the technical deficit can easily double in a year.
Following a continuous modernization approach, developers are consistently replacing older code. Because the incremental updates require less test time, new code can be delivered faster. Changes in methodology or coding standards can be incorporated into the ongoing development cycle to minimize the amount of technical debt added.
Limited Monetary Loss
Continuous modernization incorporates user feedback into the development process. With feedback, developers can adjust the software to better reflect user needs. This process minimizes monetary loss that can result from a comprehensive software update.
Large development projects that follow the traditional management path consume significant resources before the end user sees the software. If the final product does not meet expectations, companies run the risk of bringing a product to market that lacks key features. Costs for reworking the software are added to the original expenditures. Businesses can find themselves selling solutions at a loss if the market will not support a price increase.
With large projects, the opportunity costs can be significant if resources are tied up reworking software after delivery. Instead of pursuing an innovative solution, developers are occupied with existing development. Iterative development allows for immediate feedback so course corrections can occur early in the development process. If the product fails to meet market expectations, organizations can terminate the effort before incurring significant losses.
Sustained Innovation
Adopting a continuous improvement mindset allows developers and architects to implement a continuous modernization methodology for software development. The process enables programmers, DevOps, and engineers to deliver innovative solutions as part of their workday.
The iterative approach lets developers test innovative solutions as early in the process as possible and receive user feedback to ensure acceptance. Freed from reworking existing code and compensating for technical debt, development staff can spend more time exploring opportunities.
Limiting financial loss and reducing risk from outdated code provide businesses with added resources to investigate new markets. With a cost-effective methodology for modernization, organizations can deliver innovative solutions that consistently meet customer expectations.
Realize the Benefits of Continuous Modernization
To realize the benefits of continuous modernization, businesses must establish and measure KPIs. They must look for tools that can refactor applications and assess technical debt.
vFunction’s Assessment Hub analyzes applications, identifies technical debt, and calculates its impact. The Modernization Hub helps architects transform monoliths into microservices. The newly released Continuous Modernization Manager lets architects shift left and address issues that could impede ongoing modernization. To see how we can help with your modernization project, request a demo today.
We’re excited to share that vFunction has been named in the Gartner 2023 report Measure and Monitor Technical Debt With 5 Types of Tools. According to Gartner, “Growing technical debt negatively impacts feature delivery, quality, and predictability of software applications. Software engineering leaders should introduce tools to proactively measure and monitor technical debt at both the code level and the architectural level.”
As stated by Gartner in their introduction:
“Technical debt often goes undetected early in product development, and software engineering teams often deprioritize technical-debt remediation to instead focus on quickly delivering new features.
Eventually, technical debt accumulates to a critical mass. At this point, the software becomes unstable, customers become dissatisfied and the product fails. This leads to large cost overruns and potentially fatal consequences for organizations.
Software engineering leaders want to mitigate these risks. They want to understand how to measure and monitor technical debt, and which types of tools their teams can use. Use this research to guide your choice of tools for measuring and monitoring your technical debt at both the component or code level, and the software architecture level.”
Gartner further describes:
“Static code analysis tools cannot provide high-abstraction-level visibility to identify technical debt in the architecture. The code-and-component-level technical debt is usually the easiest type of debt to measure and pay down. At the same time, the architectural-level debt has a much higher impact on overall product quality, feature delivery lead time and other metrics. Unfortunately, it also takes the most effort to fix.”
Recognized as an Architecture Technical Debt Analysis Tool, vFunction analyzes, baselines, continuously observes, and helps fix architectural technical debt and drift problems before they can result in high profile business outages or shutdowns.
The newly launched vFunction Architectural Observability Platform is designed to give application architects the observability, visibility, and tooling they need to understand, track, and manage architectural technical debt as it develops and grows over time. It shifts architectural observability left into the ongoing software development lifecycle, enabling teams to manage, monitor, and fix application architecture anomalies on an iterative, continuous basis.
In the report, Gartner recommends:
“To help their organizations to successfully measure and monitor technical debt, software engineering leaders should:
Avoid compromising code quality and application delivery by proactively measuring and monitoring technical debt at the code level.
Prevent time-consuming architectural rework by introducing tools to analyze architectural technical debt and monitor the amount of debt in their software architecture.”
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Note: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Every software development project has three constraints—time, resources, and quality. Knowing how to balance them is at the core of delivering consistent success. A well-balanced project resembles an equilateral triangle where the same stability is available no matter which side forms the base.
Over time, even the most balanced software loses stability. New features are added, and old functionality is disabled. Developers come and go, reducing team continuity. Eventually, the equilateral triangle looks more like an isosceles triangle, with a significant amount of technical debt to manage. That’s when refactoring projects often enter the development process.
What is a Refactoring Project?
Refactoring enables software teams to re-architect applications and restructure code without altering its external behavior. It may involve replacing old components with newer solutions or using new tools or languages that improve performance. These projects make code easier to maintain by eliminating dead or duplicate code and complex dependencies.
Incorporating refactoring into the development process can also extend the life of an application, allowing it to live in different environments, such as the cloud. However, refactoring doesn’t always reshape code to a well-balanced equilateral triangle. Plenty of pitfalls exist that can derail a project, despite refactoring best practices. Let’s look at seven mistakes that can impact the outcome of an application refactoring project.
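As a reminder of what “restructure without altering external behavior” means, here is a deliberately tiny before-and-after in Java; the duplicated tax rule and the class names are invented for illustration:

```java
// Before: the same tax rule is duplicated inline in two methods.
class InvoiceBefore {
    double total(double net)     { return net + net * 0.19; }
    double lineTotal(double net) { return net + net * 0.19; } // duplicate logic
}

// After: identical external behavior, one named rule, one place to change.
class InvoiceAfter {
    private static final double TAX_RATE = 0.19; // illustrative rate

    private double withTax(double net) { return net + net * TAX_RATE; }

    double total(double net)     { return withTax(net); }
    double lineTotal(double net) { return withTax(net); }
}
```

Every refactoring in a modernization project is this same move at a larger scale: callers observe identical behavior while the structure underneath improves.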
Mistake #1: Starting with the Database or User Interface
When modernizing a monolith, there are three tiers you can focus on: the user interface, the business logic, or the data layer. There’s a temptation to go for the easy wins and start with the user interface, but in the end you may have a shinier user interface while still facing the same issues that triggered the modernization initiative in the first place: exploding technical debt, decreasing engineering velocity, rising infrastructure and licensing costs, and unmet business expectations.
On the other hand, the database layer is often a first target to modernize or replace due to escalating licensing and maintenance costs. It would feel great to decompose a monolithic database into smaller, cloud-native data stores using faster, cheaper open source-based alternatives or cloud-based data layer services. But unfortunately, that’s putting the cart before the horse. To break down a database effectively, you need to first decompose the business logic that uses the data services.
By decomposing and refactoring the business logic you can create microservices that eliminate cross-table database dependencies and pair new independent data stores with their relevant microservices. Likewise, it’s easier to build new micro-frontends for these independent microservices once they have been decomposed with crisp boundaries that minimize or eliminate interdependencies.
The final consideration is managing risk. Your data is gold and any changes to the data layer are super high risk. You should only change the database once, and only after you have decomposed the monolithic business logic into microservices with one data store per microservice.
Focusing on the business logic first optimizes microservice deployment to reduce dependencies and duplication. It ensures that the data layer is divided to deliver a reliable, flexible, and scalable design.
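What a crisp boundary looks like in code is worth a sketch. Assuming a hypothetical order domain extracted from the monolith, other services would program against an interface like the one below and never against the order tables themselves:

```java
import java.util.List;

// Hypothetical boundary for an extracted order microservice. Callers see
// behavior, not the monolith's shared tables; the order data store now
// belongs to this service alone, which is what removes the cross-table
// dependencies described above.
public interface OrderService {
    record OrderId(String value) {}
    record LineItem(String sku, int quantity) {}
    enum Status { PLACED, SHIPPED, CANCELLED }

    OrderId place(String customerId, List<LineItem> items);
    Status status(OrderId id);
}
```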
Mistake #2: Boiling the Ocean
Boiling the ocean means complicating a task to the point that it is impossible to achieve. Whether focusing on minutiae or allowing project creep, refactoring projects can quickly evolve into a mission impossible. Simplifying the steps makes it easier to control.
One common mistake in refactoring is trying to re-architect an entire application all at once. While a perfect, fully cloud-native architecture could be the long-term goal, a modernization best practice is to select one or a small number of domains or functional areas in the monolith to refactor and move into microservices. These new services might be prioritized by their high business value, high costs, or shared platform value. Many very successful modernization projects extract only a key handful of services and leave the remaining monolith as is.
For example, instead of jumping into a more complex service-mesh topology first, take a more practical, interim step with a hub and spoke topology that centralizes traffic control, so messages coming to and from spokes go through the hub. The topology reduces misconfiguration errors and simplifies the deployment of security policies. It enables faster identification and correction of errors because of its consolidated control.
Trying to implement a full-mesh topology increases connections, complicating monitoring and troubleshooting efforts. Once comfortable with a simpler topology, then look at a service mesh. Taking a step-by-step approach prevents a mission-impossible scenario.
Mistake #3: Ignoring Common Code
Although refactoring for microservices encourages exclusive class creation, it also discourages re-inventing the wheel. If developers approach a refactoring project assuming that every class must be exclusive to a single service, they may end up with an application full of duplicate code.
Instead, programmers should evaluate classes to determine which ones are used frequently. Putting frequently used code into shared or common libraries makes it easier to update and reduces the chances that different implementations may appear across the application.
However, common libraries can grow uncontrolled if there are no guidelines in place as to when and when not to turn a class into a shared library. Modernization platforms can detect common classes and help build rational and consistent common libraries. Intelligent modernization tooling can ensure common code is not ignored while minimizing the risk of a library monolith.
Mistake #4: Keeping Dead Code Alive
Unreachable dead code can commonly be detected by a variety of source code analysis techniques. The more dangerous form of dead code is code that is still reachable but no longer used in production, caused by functions that become obsolete, get replaced, or are simply forgotten as new services are added. Using static and dynamic analysis, developers can identify this reachable dead code, or “zombie code,” with observability tooling that compares actual production and user access against the static application structure.
This type of dead code exists because many coders are afraid to touch old code as they are unsure of what it does or what it was intended to do. Rather than risk disrupting the unknown, they let it continue. This is just another example of technical debt that piles up over time.
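A contrived Java example shows why this kind of dead code is so hard to spot statically; nothing about legacyRate() below looks dead to the compiler:

```java
// calculate() is what production actually runs. legacyRate() compiles
// cleanly, looks plausible, and may still be referenced from some forgotten
// admin path, so static analysis alone won't condemn it. Only observability
// tooling that compares real production execution against the static
// structure reveals that it never runs: "zombie code."
class TaxCalculator {
    double calculate(double amount) {
        return amount * currentRate();
    }

    private double currentRate() {
        return 0.19; // current rule
    }

    // Superseded years ago but never deleted, because no one is sure
    // what still depends on it.
    double legacyRate(double amount) {
        return amount > 1000 ? amount * 0.16 : amount * 0.19; // obsolete rules
    }
}
```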
Mistake #5: Guessing on Exclusivity
Moving toward a microservice architecture means ensuring that application entities such as classes, beans, sockets, or transactions appear in only one microservice. In other words, every microservice performs a single function with clearly defined boundaries.
The decoupling of functionality allows developers to build, deploy, and scale applications independently. The concept enables faster deployments with lower risk than older monolithic applications. However, determining the level of exclusivity can be challenging.
Intelligent modernization tooling can analyze complex interdependencies and help design microservices that maximize exclusivity. Without automated tools, this is a long, manual, painstaking process that is not based on measurements and analytics but most often relies on trial and error.
Mistake #6: Forgetting the Architecture
Refactoring focuses on applications. How efficiently does the code accomplish its tasks? Is it meeting business requirements for agility, reliability, and resiliency? Without looking at the architecture, improvements may be limited. Static code analysis tools will help identify common code “smells,” but they ignore the architecture. And architectural technical debt is the biggest contributor to cost, slow engineering velocity, sluggish performance, and eventual application failures.
System architects lack the tools needed to answer questions regarding performance and drift. Until architectural constructs can be observed, tracked, and managed, no one can assess the impact architectural debt has on refactoring. Just like applications, architecture can accumulate technical debt.
Architectural components can grow into a web of class entanglements and long dependency chains. The architecture can exhibit unexpected behavior as it drifts away from its original design. Unfortunately, without the right tools, technical debt can be hard to identify, let alone quantify.
Mistake #7: Modernizing the Wrong Application
Assessing whether you should modernize and refactor an application in the first place is the critical first step. Is the application still important to the business? Can it be more easily replaced by a SaaS or COTS alternative? Has the business changed so dramatically that the app should simply be rewritten? How much technical debt is the app carrying, and how hard will it be to refactor?
Assessment tools that focus on architectural technical debt can help quantify project scope in terms of time, money, and resources. When deployed appropriately, refactoring can help project managers break down an overwhelming task into smaller efforts that can be delivered quickly.
Building an Equilateral Triangle
When software development teams successfully manage the three constraints of time, quality, and resources, they create a well-balanced solution that is delivered on time and within budget, containing the requested features and functionality. They have momentarily built an equilateral triangle.
Creating an Equilateral Triangle with Automation
With AI-powered tools, refactoring projects will accelerate. Java or .NET developers can refactor their monoliths, reduce technical debt, and create a continuous modernization culture. If you’re interested in avoiding refactoring pitfalls, schedule a vFunction demo to see how we can help.
To keep pace in their marketplace, many businesses today are attempting to modernize their business and the legacy apps they depend on by moving them to the cloud. But experience has shown that migrating workloads requires a structured framework to guide developers and IT staffers in this new environment. That’s what the cloud center of excellence (CCOE) is all about. According to Gartner, a CCOE is the optimal way to ensure cloud success.
Simply lifting and shifting legacy software as-is to the cloud still leaves you with a monolith and merely changes the location of your problem. Most legacy apps can run fine in the cloud but can’t take advantage of today’s cloud native ecosystem and managed services, and moving them unchanged to the cloud does little to fix their issues. That’s why application modernization, which restructures apps to give them cloud-native capabilities, must be an essential component of any sound cloud strategy.
Application modernization can itself be a complex and difficult process: the historical failure rate for such projects is 74%. But by incorporating a specific application modernization focus into your CCOE, you can avoid common pitfalls, enforce best practices, and lay a firm foundation for success.
AWS defines the concept this way: “A Cloud Center of Excellence (CCoE) is a cross-functional team of people responsible for developing and managing the cloud strategy, governance, and best practices that the rest of the organization can leverage to transform the business using the cloud.”
The cloud center of excellence guides the entire organization in developing and executing its approach to the cloud. According to the AWS definition, the CCOE has three main responsibilities:
1. Cloud strategy
Your cloud strategy outlines the general approach, ground rules, and tools your organization will use in moving software and workflows to the cloud. It defines the business outcomes you want to achieve and establishes the technical standards and guidelines you’ll follow, taking into account issues such as costs vs benefits, risks, organizational capabilities, and legal or compliance requirements.
2. Governance
Red Hat defines cloud governance as “the process of defining, implementing, and monitoring a framework of policies that guides an organization’s cloud operations.” A governance regime will include specific rules and guidelines that aim at minimizing complexity and risk by defining how individuals and teams in your organization use cloud resources.
3. Best practices
Cloud best practices often differ substantially from those developed in on-site data centers. So, a fundamental part of a CCOE’s responsibility is to introduce developers and IT staffers to practices that are optimized for the cloud environment.
The Importance of Application Modernization
Because today’s market environment is highly dynamic, companies must be able to quickly respond to changes in customer requirements or other aspects of the competitive landscape. But legacy software, by its very nature, is difficult to adapt to meet new requirements.
Legacy apps are typically monolithic in structure (the codebase is organized as a single unit of perhaps millions of lines of code with complex dependencies interwoven throughout). As a result, such apps are usually hard for modern developers to understand and can be so brittle that even small changes might introduce downstream issues that bring the entire system to a screeching halt.
Plus, because these older apps were normally designed to operate as closed systems, integrating them with modern, interdependent cloud managed services can be difficult and complex.
But many organizations still depend on these technologically ancient apps for some of their most business-critical processing, so they can’t simply be retired. The alternative is to modernize them by refactoring the monolithic code into a cloud-native, microservices architecture.
The effect of that kind of modernization (as contrasted with simply moving apps to the cloud with little change) is to give you a suite of re-architected applications that, while maintaining continuity for users, have the flexibility to be easily updated to meet new business requirements.
Why App Modernization Should Be a Core Competency of Your CCOE
Your cloud center of excellence should be your organization’s acknowledged authority on all things cloud. And application modernization is all about restructuring your legacy apps so that they can integrate smoothly into the cloud ecosystem.
Refactoring legacy apps to a cloud-native architecture is an inherently complex process that demands a high degree of expertise in architecture, design, cloud technology and operations. That’s why it’s critical that your CCOE also function as an MCOE (modernization center of excellence). Otherwise, your modernization efforts are very likely to struggle, and you stand a good chance of adding a percentage point or two to that 74% of app modernization projects that fail to meet their goals.
Not only will your CCOE/MCOE provide the fundamental cloud-related technical expertise and guidelines that underpin any successful effort to translate workflows to the cloud, but it must also help reshape your entire IT organization to fit the technological and operational requirements of the cloud environment.
For example, when a company’s modernization efforts lack the guidance and governance that an MCOE should provide, the organization is very likely to run afoul of Conway’s Law. This maxim, formulated in 1967 by software expert Melvin Conway, declares that:
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.
The practical effect of Conway’s Law is that to be effective, your modernization teams must be restructured to reflect a whole new set of technological and organizational imperatives imposed by the cloud environment. In other words, to successfully refactor legacy apps to a microservices architecture you should reorganize your development teams based on the way cloud-based microservices work. Neglecting to restructure your development organization based on specific cloud-native technological patterns is an almost sure recipe for failure. As software engineer Alex Kondov so graphically puts it:
“You can’t fight Conway’s Law… Time and time again, when a company decides that it doesn’t apply to them they learn a hard lesson… If the company’s structure doesn’t change, the software will slowly evolve into something that mirrors it.”
Reshaping your entire IT operation (and by extension your organization as a whole) should not be undertaken lightly. It should only be done based upon authoritative guidance provided by a team that has an acknowledged depth of experience and expertise; in other words, a well-established and respected CCOE/MCOE.
Implementing a CCOE/MCOE is an Emerging Best Practice for Successful Companies
Today more and more companies are recognizing the critical necessity of having an effective CCOE/MCOE organization to guide their modernization efforts.
For example, an IDC report relates the experience of a large logistics company that failed three times in its efforts to move applications and workflows to the cloud. But it succeeded on its fourth attempt “when it created a multi-persona cloud center of excellence team responsible for architecture, automation, governance, operations, and unified delivery model.”
This experience is far from unique—other well-known companies, such as Dow Jones, have reported similar success stories. So, it’s not surprising that in a 2022 survey of corporate cloud infrastructure stakeholders an impressive 90% of respondents said they either have or plan to institute a cloud center of excellence. According to Computerworld, 64% of SMBs (small and medium-sized businesses) have already implemented CCOE-like teams.
Next Steps: Create or Upgrade Your MCOE
Ideally, you should have a CCOE/MCOE organization in place from the very beginning of your company’s modernization journey. But even if you’ve already started without an MCOE, it’s critical for long-term success that you initiate one as soon as possible.
If you already have an established CCOE/MCOE, you’ll want to focus on ensuring that it has the requisite skills, expertise, experience, mandate, and perhaps most important, management backing to provide authoritative leadership for your organization.
If, on the other hand, you have not yet instituted an MCOE (or an MCOE focus within your CCOE), now’s the time to put one in place. But how do you get started?
Getting Started
Whether you’re starting or upgrading your CCOE/MCOE, there are a couple of essential steps you should take.
The first and most important step is to ensure that your company’s executive management is visibly committed to the program. The CCOE/MCOE team will not only require budget and staffing resources, but must also have clear authority to set and enforce uniform technical and operational guidelines that apply to all cloud and modernization initiatives across the organization.
Then you must assemble and train your team, ensuring that it either has or can tap into the highest levels of cloud-related technical skills. Remember that your CCOE/MCOE team must not only be able to provide authoritative guidance concerning industry-wide technical best practices, but must do so within the context of your organization’s unique culture, goals, and cloud strategy.
But if your company is like most, you’re likely to discover that your in-house staff simply doesn’t possess all the experience and skills required to build a CCOE/MCOE that can be effective at providing expert cloud guidance and governance companywide. The best way to ensure that your team can tap into all the technical skills and tools it needs is to partner with another company that specializes in cloud and modernization technologies.
vFunction not only offers industry-leading experience and expertise in cloud-based application modernization, but also provides an advanced, automated modernization platform that can deliver data-based assessments of the modernization requirements of your legacy apps, and then substantially automate the process of refactoring those apps into microservices.
If you’re ready to take the next step in creating or upgrading your CCOE/MCOE team, vFunction can help. Please contact us today.
vFunction today launched the Continuous Modernization Manager (CMM), a new product for architects and developers to continuously monitor, detect, and pinpoint application architecture drift issues before they cause technical debt meltdowns. vFunction CMM enables software architects to shift left and prevent technical debt disasters by baselining, observing, pinpointing, and alerting on application architecture drift issues before they result in business catastrophes like we’ve seen with Southwest Airlines, Twitter, the FAA, and countless unnamed others. Read the full press release.
Architectural Technical Debt
Architectural technical debt accumulates unobserved in the shadows until disaster strikes – a silent killer for the business. Application architects, up to this point, have lacked the architectural observability, visibility, and tooling to understand, track, and manage architectural technical debt. This has resulted in not only technical problems such as architectural drift and erosion but also numerous large and small disasters.
So what is architectural technical debt? It’s the accumulation of architectural components, decisions, and drift that results in “a big ball of mud” that architects are unable to see or track – making it essentially an opaque “black box.” Architectural technical debt consists of class entanglements, deep dependencies, dead code, long dependency chains, dense topologies, and a lack of common code libraries. Architectural debt is NOT source code quality or cyclomatic complexity, although those are critical technical debt elements to track and manage.
Architectural technical debt is hard to find and harder to fix. It affects product quality, feature delivery lead time, and testing times, and, very importantly, it is the primary predictor of modernization complexity: how hard it will be to modernize (refactor or re-architect) an application. Peter Drucker established one of the most basic business principles when he stated, “You can’t improve what you can’t measure.” He also emphasized that you can’t stop at measurement; you need to manage what you measure. Architectural debt has been hard to measure, and thus hard to find and fix. You need to observe the architecture, baseline it, and detect architectural drift, and then apply intelligent modernization tooling and techniques to manage the architectural anomalies.
“One of the most critical risks facing organizations today is architectural technical debt,” said Jason Bloomberg, Managing Partner of analyst firm Intellyx. “The best way to keep such debt from building up over time is to adopt Continuous Modernization as an essential best practice. By measuring and managing architectural technical debt, software engineering teams can catch architectural drift early and target modernization efforts more precisely and efficiently.”
Architectural Observability
Observable architecture is the goal. Today, architects lack the observability, visibility, and tooling to understand, track, and manage architectural technical debt. They are looking to answer questions like:
What is the actual architecture of my monolith?
How is it behaving in production?
What’s my architectural baseline?
Has the app architecture drifted from the norm?
Do I have a major architecture issue I need to fix now?
Where is it and how do I fix it?
If I can’t identify my core services, their key dependencies, my common classes, my highest-debt classes, and the relevant service exclusivity, I’m running blind through the software development lifecycle from an architectural perspective.
Shift Left for Architects
vFunction Continuous Modernization Manager lights up these black-box, ball-of-mud applications – making the opaque transparent – so architects can shift left into the ongoing software development lifecycle. This allows them to manage, monitor, and fix application architecture anomalies on an iterative, continuous basis before they blow up into bigger issues. CMM observes Java and .NET applications and services, first establishing an architectural baseline and then monitoring for drift and erosion to detect critical architectural anomalies, including:
New Dead Code Found: Detects emerging dead code, indicating either that unnecessary code has surfaced in the application or that the baseline architecture has drifted and existing class or resource dependencies have changed.
New Service Introduced: Based on the observed baseline service topology, vFunction identifies and alerts when a new service is detected, signaling that a new domain or major architectural event has occurred.
New Common Classes Found: Building a stable, shared common library is a critical modernization best practice to reduce duplicate code and dependencies. Newly identified common classes can be added to a common library to prevent further technical debt from building up.
Service Exclusivity Dropped: vFunction measures and baselines service exclusivity – the percentage of a service’s classes and resources that are independent – alerting when new dependencies are introduced that expand architectural technical debt (see the sketch after this list).
New High-Debt Classes Identified: vFunction identifies the classes carrying the highest technical debt – the largest contributors to application complexity. A class’s “high-debt” score is determined by its dependents, dependencies, and size, and pinpoints a critical software component that should be refactored or re-architected.
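As promised above, here is a minimal Java sketch of the exclusivity idea – the share of a service’s classes used by that service alone. The data representation and the service names are assumptions for illustration, not vFunction’s actual metric implementation:

```java
import java.util.*;

/** Illustrative service-exclusivity calculation (simplified; not vFunction's actual metric). */
public class ExclusivityCheck {

    /**
     * Exclusivity = share of a service's classes that no other service uses.
     * @param serviceToClasses each service name mapped to the classes it uses
     */
    public static double exclusivity(String service, Map<String, Set<String>> serviceToClasses) {
        Set<String> own = serviceToClasses.get(service);
        long exclusive = own.stream()
                .filter(cls -> serviceToClasses.entrySet().stream()
                        .noneMatch(e -> !e.getKey().equals(service) && e.getValue().contains(cls)))
                .count();
        return 100.0 * exclusive / own.size();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> usage = Map.of(
                "payments", Set.of("PaymentGateway", "Invoice", "AuditLog"),
                "shipping", Set.of("Shipment", "AuditLog"));          // AuditLog is shared
        System.out.printf("payments exclusivity: %.0f%%%n",           // prints 67%
                exclusivity("payments", usage));
    }
}
```

A falling exclusivity score means a service is sharing more and more of its internals – exactly the kind of creeping entanglement an alert should surface before it becomes a refactoring project.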
Users are notified of changes in the architecture through Slack, email, and the vFunction Notifications Center. Through vFunction Continuous Modernization Manager, architects can configure schedules for learning and analysis, as well as baseline measurements.
New in Modernization Hub and Assessment Hub
In addition, the newest release of vFunction Modernization Hub has added advanced collaboration capabilities to enable modernization architects and teams to more easily work together. New analytics also pinpoint the highest technical debt classes to focus the refactoring priorities. vFunction Assessment Hub has added a new Multi-Application Assessment Dashboard to analyze technical debt across a broad application portfolio.
Newly announced, vFunction Assessment Hub now includes a Multi-Application Assessment Dashboard that tracks and compares different parameters for hundreds of applications. Multiple applications can be analyzed at a glance for technical debt, aging frameworks, complexity, state, and additional architectural factors.
Also new in vFunction Modernization Hub 3.0 is a set of collaboration features that helps modernization teams avoid conflicts and improve clarity by working in parallel on different measurements and later merging them into one. A user can protect services they wish to keep unmodified, preventing conflicts when multiple teams work on the same measurement, especially when adding or removing classes from common areas.
Modernization is Not Linear: It’s a Continuous Best Practice
The most important takeaway from this announcement is that modernization is not a one-and-done project. It needs to be an iterative, cyclical best practice that requires teams to adopt and commit to a culture of continuous measurement and improvement – led by architects shifting left into their development processes and taking firm ownership of their actual architectures. Create observable architecture through architectural observability and tooling that catches architectural drift before it leads to greater issues. We’ve all seen what can happen if you let the silent killer of architectural technical debt continue to lurk in the shadows. Shine a light on it, find it, fix it, and prevent future monoliths from ever forming again.
Technical debt. It’s a term most people have never heard of. But over the holiday season of 2022, thousands of air travelers became personally acquainted, much to their dismay, with the disastrous impact a failure to modernize critical applications and eliminate technical debt can have on companies and their customers.
Starting on December 21, 2022, Southwest Airlines was forced to cancel almost 17,000 flights, shutting down two-thirds of its operations during one of the busiest travel seasons of the year. How could such a catastrophe happen?
The Southwest Airlines Shutdown Debacle
Year-end winter storms covered much of the country with snow, ice, bitter cold, and high winds, forcing all the nation’s major airlines to scramble to adjust their flight schedules and aircrew assignments. Most did so without inflicting severe inconveniences on their passengers. But the story at Southwest was different. The carrier had to cancel 59% of its flights while other major airlines canceled only 3% of theirs.
The difference was software. As Matt Ashare, a reporter writing for CIO Dive, succinctly put it:
“Outdated technology was at the heart of the Southwest meltdown.”
Southwest manages its flight and crew scheduling using an application called SkySolver, which the company admits is nearing its end of life. This system first went into service in 2004, the same year that Facebook and Gmail were introduced.
But unlike those services, which are still going strong, Southwest failed to keep its SkySolver implementation up to date as airline flight volumes surged 69% between 2004 and 2019. Although Southwest’s version of SkySolver was specifically designed to handle aircrew scheduling issues, when severe weather caused nationwide flight disruptions, the system was overwhelmed. As a result, when flight schedules were scrambled, aircrew members were forced to resort to manual procedures to inform the airline of their whereabouts and get reassigned.
With 20,000 frontline employees trying to coordinate their activities through phone calls, text messages, and emails, Southwest found itself unable to track the whereabouts of its pilots and flight attendants and match them with planes. It had no choice but to shut down most of its operations.
How Technical Debt Shut Southwest Airlines Down
The Southwest shutdown provides a classic case study of the impact technical debt can have on a company that neglects to deal with it in a timely fashion.
What is technical debt? The term was coined by computer scientist Ward Cunningham in 1992 to describe what happens when software teams take shortcuts that seem expedient in the short term, but that result in higher remediation costs in the future.
That aptly describes what happened at Southwest. The company grew quickly and was considered a forward-looking innovator in the airline industry, particularly in the area of customer experience. And it wasn’t averse to investing in its technology. Earlier in 2022, the company announced a plan to spend $2 billion on customer experience upgrades.
The problem was that Southwest focused so intently on investment priorities that seemed to promise immediate earnings rewards, such as customer experience improvements, that it failed to address the needs of the software that ran its own internal operations. Ironically, that choice resulted in what’s probably the worst customer experience fiasco in company history.
It’s not that Southwest was unaware that its crew scheduling application was in desperate need of attention. The pilots union had been loudly complaining about the issue since 2015. In fact, in 2016 the pilots and aircraft mechanics unions went so far as to proclaim votes of no confidence in then-CEO Gary Kelly due to his “inability to prioritize the expenditure of record-breaking revenues toward investments in critically outdated IT infrastructure and flight operations.”
As one analysis of the meltdown concluded:
“Southwest managed its technology portfolio as a near-term cost instead of a long-running investment driving growth. This misguided approach resulted in an unmanageable level of technical debt exposing Southwest in the most public way possible — the failure to deliver an acceptable customer experience.”
Because of its failure to address technical debt in its crew scheduling software, Southwest took an immense hit to its reputation and, with total losses from the disruption estimated at more than $1 billion, to its bottom line. Even worse, it probably lost customers it will never regain, because they no longer trust Southwest’s ability to smoothly handle potentially disruptive events.
Even the FAA Isn’t Immune to Technical Debt-Related Glitches
The U.S. government agency that oversees airlines, the Federal Aviation Administration (FAA), experienced its own embarrassing shutdown of flights just weeks after the Southwest debacle.
Because of a failure in its 30-year-old Notice to Air Missions (NOTAM) system, which provides critical flight safety and other information to pilots, the FAA was forced to halt departures nationwide for a short period. Geoff Freeman, president and chief executive of the U.S. Travel Association, issued a statement saying,
“Today’s FAA catastrophic system failure is a clear sign that America’s transportation network desperately needs significant upgrades.”
No Organization Is Safe As Long As It Ignores Technical Debt
These cases illustrate a fundamental reality that every business leader should be aware of: any organization that depends on software for its operations but neglects to deal with its technical debt makes itself vulnerable to being suddenly pitched into a crisis that can ruin both its reputation and its bottom line. If you want to shield your company from the potential of suffering similar disruptions, you need to be proactive about dealing with technical debt.
There’s an old saying that goes, “If it ain’t broke, don’t fix it.” That, apparently, was the attitude of executives at Southwest concerning their flight crew scheduling software. Although flight crews complained about the SkySolver system, the software’s deficiencies didn’t seem to be having any direct negative impact on customer experience. So dealing with the application’s all-too-evident technical debt issue remained low on the company’s priority list.
But failing to invest in eliminating technical debt from critical systems because those systems seem to be working acceptably at the moment is a high-stakes gamble. As CIO Dive’s Matt Ashare says,
“The bill for tech debt rarely arrives on a good day… Systems tend to fail when stressed, not when conditions are optimal. Waiting for a bad situation to pay down technical debt is a high-risk strategy.”
Business leaders should also consider that the longer technical debt is allowed to remain in their systems, the greater the cost of fixing it when that task finally becomes unavoidable.
For example, Southwest now says that it’s committed to upgrading SkySolver. But the software’s current vendor, GE Flight Efficiency Services, says there have been eight update releases in just the last year. That means that Southwest is presumably at least eight releases behind. And to make matters worse, SkySolver is an off-the-shelf package that each airline optimizes for its own operations. Integrating those eight or more upgrades with Southwest’s previous modifications is almost certain to be a time-consuming, costly, and risky endeavor.
Are your company’s systems burdened with a load of technical debt? Unless you don’t depend on software at all, or are already proactive in addressing and reducing technical debt, the answer to that question is very likely, “yes.”
Your biggest hindrance in dealing with technical debt may well be simple inertia. Remember that technical debt continues to grow as long as it’s in place, and so will the difficulty and cost of fixing it. If you wait until some sudden, urgent, and possibly very public crisis, such as the one Southwest had to suffer through, forces you to address your technical debt issue, both the costs and the risks of fixing the problem will multiply.
One thing to remember is that even recently created software may have technical debt issues if developers cut corners to get it released more quickly. As William Aimone, Managing Director at Trenegy explains,
“Technical debt is the result of issues that accrue over time due to previously rushed or poorly executed IT projects. Teams often implement an easy or quick IT solution to save money, get something released quickly, or meet deadlines, but it always comes back to bite.”
So, you shouldn’t take it for granted that because the software you depend on has all its updates installed, it must be free of technical debt.
Getting Started
Dealing proactively with technical debt needs to be a continuous best practice and become part of your development culture. To be successful, you’ll need both the right expertise and the right tools. Most businesses simply haven’t applied the time and resources to address technical debt and can benefit from new skills, tools, and guidance.
A good first step in dealing with your technical debt issue would be to consult with an expert partner that can come alongside you and help guide you on the journey toward freedom from technical debt.
vFunction not only provides experience and expertise in dealing with technical debt, but also an advanced application modernization platform that can help you assess where you stand. It can then substantially automate the process of refactoring your applications to modernize them and eliminate technical debt, propelling your organization toward a more efficient and competitive future.
If you’d like to know more about how you can deal with your organization’s technical debt, contact vFunction today to see how we can help.
When Watts Humphrey stated that every business is a software business, organizations realized that their survival depended on software. Today, developers also need to view cybersecurity as part of their responsibilities. It’s not enough to add security as an afterthought.
According to Hiscox’s 2022 report, many organizations are using the US National Institute of Standards and Technology’s (NIST) SP 800-160 standard as a blueprint for strengthening security defenses. Part of that standard offers a framework for incorporating security measures into the development process.
Patching security weaknesses after release is a little like shutting the barn door after the animals have escaped. Developers chase after the elusive vulnerability, trying to corral and correct it. No matter how hard they try, developers can’t make an existing system as secure as one built with security best practices in mind.
When modernizing legacy systems, developers often adopt a microservices architecture. However, making that the default choice means ignoring the associated security risks. They must assess the potential risks and mitigation methods of monolithic vs. microservice designs to determine the most secure implementation.
Security Risks: Microservices vs. Monoliths
Security, like time, is relative. Is a monolith application less secure than microservices? Not always. For example, a simple monolith application with a small attack surface may be more secure than the same application using microservices.
An attack surface is the set of points on the boundary of a system where bad actors can gain access. A monolithic application often has a smaller attack surface than its microservice-based counterpart.
That said, attack surfaces are not the only security concerns facing developers as they look to incorporate security into the design of an application. Other areas to consider include coupling, authentication, and containerization.
Security Concern #1: Coupling vs. Decoupling
Legacy software may have thousands of lines of code wrapped into a single application. The individual components are interconnected, creating a tightly coupled piece of software. Microservices, by design, are loosely coupled. Each service is self-contained, resulting in fewer dependencies.
When bad actors compromise monoliths, they gain access to the entire application. The damage can be catastrophic. With microservices, a single compromise does not guarantee access to multiple services.
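To make the coupling distinction concrete, here is a minimal Java sketch; the class names are hypothetical and the example only illustrates the structural difference:

```java
/** Tight coupling: the order flow constructs and depends on a concrete billing class. */
class TightlyCoupledOrderFlow {
    private final BillingModule billing = new BillingModule(); // hard dependency baked in
    void checkout(String orderId) { billing.charge(orderId); }
}

class BillingModule {
    void charge(String orderId) { /* billing logic elided */ }
}

/** Loose coupling: the service depends only on a narrow contract it does not implement. */
interface BillingClient {
    void charge(String orderId);
}

class OrderService {
    private final BillingClient billing; // any implementation (local, remote, mock) can be injected
    OrderService(BillingClient billing) { this.billing = billing; }
    void checkout(String orderId) { billing.charge(orderId); }
}
```

In the loosely coupled version, a compromised or faulty billing implementation can be isolated or replaced behind the interface without touching the order flow, which is part of why a single breach in a microservice does less damage.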
Once exploitation is detected, it can take months to contain. IBM’s latest Cost of a Data Breach report found that the average time to containment was 75 days. The shorter the data breach lifecycle, the lower the cost.
Given the inherent coupling of a monolith, finding the vulnerability can be challenging, especially if dead code has accumulated. The discrete nature of microservices makes it easier for programmers to locate a possible breach, reducing its lifecycle length and associated costs.
Security Concern #2: Attack Surface Sizes
As mentioned above, attack surfaces are points on the boundary of a system where an unauthorized user can gain access. The larger the boundary, the higher the risk. While current methods may default to microservices, they may not be the most secure architecture in every instance.
For example, an application with a few modules will have a smaller attack surface than the multiple microservices required to deliver the same functionality. The individual surfaces would be smaller, but the total surface of the application would be larger.
At the same time, a monolithic application can become difficult to manage as it grows more complex. Most legacy monoliths are complex, with multiple functions, modules, and subroutines. Developers must weigh attack surfaces against complexity when designing an application.
When a vulnerability is identified, it may take hours or even days to locate and patch the weakness in a monolithic application. Microservices are discrete components that enable programmers to find and correct flaws quickly.
Security Concern #3: Authentication Complexity
Monoliths use one-and-done authentication. Since access to different resources occurs within the same application, re-authenticating the requesting source in each module is redundant. However, that same approach shouldn’t be applied when migrating to a microservices design.
Microservices communicate through application programming interfaces, called APIs, when they need access to another microservice. Every request is an opportunity for compromise. That’s why microservices must incorporate authentication and authorization functionality in their design.
Adding this level of security as an afterthought creates its own set of vulnerabilities. Ensuring that each microservice has an authentication code in place can be challenging, depending on the number of services. If multiple developers are involved, implementation can vary. Finally, programmers from a monolith environment may overlook the requirement if it’s not part of their coding mindset.
Making an application less vulnerable is an essential feature of security by design. Application designs should include robust authentication and authorization code, whether monolith or microservices. Developers should consider a zero-trust implementation that requires continuous verification.
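As an illustration of per-request verification, here is a deliberately simplified Java sketch that checks an HMAC-signed token before a service handles a call. The token format here is an assumption for the example; a real deployment would use a standard such as JWT or OAuth 2.0 with a vetted library:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.Base64;

/** Simplified per-request token check; a stand-in for real JWT/OAuth validation. */
public class RequestAuthenticator {

    private final SecretKeySpec key; // shared secret for the demo; real systems prefer asymmetric keys

    public RequestAuthenticator(byte[] secret) {
        this.key = new SecretKeySpec(secret, "HmacSHA256");
    }

    /** Token format assumed here: "<payload>.<base64 HMAC-SHA256 of payload>". */
    public boolean verify(String token) {
        int dot = token.lastIndexOf('.');
        if (dot < 0) return false;
        try {
            byte[] claimed = Base64.getDecoder().decode(token.substring(dot + 1));
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(key);
            byte[] expected = mac.doFinal(token.substring(0, dot).getBytes(StandardCharsets.UTF_8));
            return MessageDigest.isEqual(expected, claimed); // constant-time comparison
        } catch (Exception e) {
            return false; // fail closed: unverifiable requests are rejected
        }
    }
}
```

The zero-trust point is in where this runs: every microservice performs the check on every inbound request, rather than trusting that an upstream service already did.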
Security Concern #4: Container Weaknesses
Moving applications into containers provides portability, fewer resources, and consistent operation. Both microservices and monoliths can operate in containers. However, containerized environments add another layer of security, provided they are managed correctly. Common security weaknesses include privileges, images, and visibility. Any application running in a container—whether monolith or microservice—shares these risks.
Privileges
Containers often run as users with root privileges because it minimizes potential permission conflicts. When containerized applications need access to resources within the container, developers do not need to worry about installation or read/write failures because of permissions.
However, running containers with root privileges elevates security risks. If the container is compromised, cybercriminals have access to everything in the container. Developers must consider using a rootless implementation or a least-privilege model to restrict access for both microservice and monolithic applications.
Images
A secure pipeline for containerized application images is essential for both monoliths and microservices. Using secured private registries and fixed image tags can reduce the risk of a container’s contents being compromised. Once an image is in production, the security risk increases exponentially.
Visibility
Tracking weaknesses during a container’s lifecycle can mitigate security risks for monoliths and microservices. Developers can deploy scanning and analysis tools to look for code vulnerabilities. They can also use tools for visibility into open-source components or applications.
In 2021, visibility concerns resulted in the federal government issuing scanning requirements for containers. The document outlines the tools needed to assess the container pipeline and images. The guidelines also recommend real-time container monitoring.
Security Concern #5: Monitoring Complexity
Lack of runtime visibility is another security risk. Applications should include event logging and monitoring to record any potential threats. Alerts should be part of any visibility tool so unusual behaviors can be assessed.
Monoliths often have real-time logging in place. This feature was added to help troubleshoot problems in highly complex applications. Writing error messages to a log with identifiers can significantly reduce the time needed to research a weakness and create a fix.
Putting real-time monitoring in place for microservices is far more time-consuming. Logging programs are not written for one large application but for many smaller ones. Many development teams skimp on or even skip monitoring, assuming each microservice is so small that finding a problem will be easy. Unfortunately, in the midst of an attack, it’s rarely easy to find the weakness.
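Lightweight, consistent logging in each service narrows that gap. Here is a minimal Java sketch using the JDK’s built-in logging, with a per-request correlation ID so a single failure can be traced across services; the class and method names are hypothetical:

```java
import java.util.UUID;
import java.util.logging.Level;
import java.util.logging.Logger;

/** Minimal sketch: log failures with a correlation ID so one request can be traced across services. */
public class OrderHandler {

    private static final Logger LOG = Logger.getLogger(OrderHandler.class.getName());

    public void handle(String orderId, String incomingCorrelationId) {
        // Reuse the caller's ID when present so every service stamps the same request identically
        String correlationId = (incomingCorrelationId != null)
                ? incomingCorrelationId : UUID.randomUUID().toString();
        try {
            process(orderId);
        } catch (RuntimeException e) {
            // The identifier lets an engineer search every service's log for this one request
            LOG.log(Level.SEVERE, "correlationId=" + correlationId + " order=" + orderId + " failed", e);
            throw e;
        }
    }

    private void process(String orderId) { /* business logic elided */ }
}
```

If each service propagates and logs the same identifier, investigating a breach becomes a search across logs rather than a reconstruction effort.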
Security By Design
Although improved cybersecurity may not be the motivating factor behind modernizing legacy software, it is an opportunity that should not be wasted. Recent white-hat efforts by Positive Technologies found that 93% of attacks were successful in breaching a company’s internal network. Their selected targets were drawn from the finance, fuel and energy, government, industry/manufacturing, and IT sectors.
Compromised credentials, including administrator passwords, were successfully used in 71% of attacks. The company was able to exploit vulnerabilities in software (60%) and web (40%) applications. Their efforts highlight the need for strengthening security in deployed applications, whether they are monoliths or microservices.
Security can no longer be an afterthought when it comes to software design. Every developer needs to look at their code through a cybersecurity lens. Neither architecture is perfect, but developers must weigh their advantages and disadvantages to ensure a secure application.
To improve application security, consider including security professionals and using automated tools during development.
Security Professionals. If an organization has access to security professionals, use them. They can identify methods and tactics that cybercriminals use to compromise systems. With this knowledge, applications can be designed with security in mind.
Automated Tools. Tools exist to help with migrating legacy applications, securing code under development, and monitoring performance in production. These tools can help developers decide which architecture is appropriate for a given application and facilitate making it as secure as possible.
Just as every company realizes how essential software is to their survival, developers need to acknowledge that cybersecurity must be part of their toolset.
vFunction’s modernization platform for Java applications provides the tools needed to migrate legacy applications. Our Modernization Hub helps move monoliths to microservices and uses AI-based tools to track behaviors. The Hub also performs static code inspection of binaries. These resources make it possible for developers to spend more time ensuring that security protocols and best practices are incorporated as part of the design. Request a demo to learn more about how vFunction can help with your modernization needs.