We’re excited to share that vFunction has been named in the Gartner 2023 report, Measure and Monitor Technical Debt With 5 Types of Tools. According to Gartner, “Growing technical debt negatively impacts feature delivery, quality, and predictability of software applications. Software engineering leaders should introduce tools to proactively measure and monitor technical debt at both the code level and the architectural level.”
As stated by Gartner in their introduction:
“Technical debt often goes undetected early in product development, and software engineering teams often deprioritize technical-debt remediation to instead focus on quickly delivering new features.
Eventually, technical debt accumulates to a critical mass. At this point, the software becomes unstable, customers become dissatisfied and the product fails. This leads to large cost overruns and potentially fatal consequences for organizations.
Software engineering leaders want to mitigate these risks. They want to understand how to measure and monitor technical debt, and which types of tools their teams can use. Use this research to guide your choice of tools for measuring and monitoring your technical debt at both the component or code level, and the software architecture level.”
Gartner further describes:
“Static code analysis tools cannot provide high-abstraction-level visibility to identify technical debt in the architecture. The code-and-component-level technical debt is usually the easiest type of debt to measure and pay down. At the same time, the architectural-level debt has a much higher impact on overall product quality, feature delivery lead time and other metrics. Unfortunately, it also takes the most effort to fix.”
Recognized as an Architecture Technical Debt Analysis Tool, vFunction analyzes, baselines, continuously observes, and helps fix architectural technical debt and drift problems before they can result in high profile business outages or shutdowns.
Newly launched, the vFunction Architectural Observability Platform is designed to give application architects the observability, visibility, and tooling they need to understand, track, and manage architectural technical debt as it develops and grows over time. This shifts architectural observability left into the ongoing software development lifecycle, so teams can manage, monitor, and fix application architecture anomalies on an iterative, continuous basis.
In the report, Gartner recommends:
“To help their organizations to successfully measure and monitor technical debt, software engineering leaders should:
Avoid compromising code quality and application delivery by proactively measuring and monitoring technical debt at the code level.
Prevent time-consuming architectural rework by introducing tools to analyze architectural technical debt and monitor the amount of debt in their software architecture.”
GARTNER is a registered trademark and service mark of Gartner, Inc. and/or its affiliates in the U.S. and internationally and is used herein with permission. All rights reserved.
Note: Gartner does not endorse any vendor, product or service depicted in its research publications, and does not advise technology users to select only those vendors with the highest ratings or other designation. Gartner research publications consist of the opinions of Gartner’s research organization and should not be construed as statements of fact. Gartner disclaims all warranties, expressed or implied, with respect to this research, including any warranties of merchantability or fitness for a particular purpose.
Every software development project has three constraints—time, resources, and quality. Knowing how to balance them is at the core of delivering consistent success. A well-balanced project resembles an equilateral triangle where the same stability is available no matter which side forms the base.
Over time, even the most balanced software loses stability. New features are added, and old functionality is disabled. Developers come and go, reducing team continuity. Eventually, the equilateral triangle looks more like an isosceles triangle, with a significant amount of technical debt to manage. That’s when refactoring projects often enter the development process.
What is a Refactoring Project?
Refactoring enables software teams to re-architect applications and restructure code without altering its external behavior. It may involve replacing old components with newer solutions or using new tools or languages that improve performance. These projects make code easier to maintain by eliminating dead or duplicate code and complex dependencies.
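To make the idea concrete, here is a minimal, hypothetical Java sketch of a behavior-preserving refactoring; the class and method names are illustrative, not taken from any particular codebase. Duplicated logic is consolidated into one method, while the results callers see remain exactly the same.

```java
// Before: the same discount calculation is duplicated in two places.
class InvoiceService {
    double totalForRetail(double amount) {
        double discounted = amount - (amount * 0.05); // duplicated logic
        return Math.round(discounted * 100.0) / 100.0;
    }

    double totalForWholesale(double amount) {
        double discounted = amount - (amount * 0.05); // duplicated logic
        return Math.round(discounted * 100.0) / 100.0;
    }
}

// After: duplication extracted into one method; external behavior is unchanged.
class RefactoredInvoiceService {
    private static final double DISCOUNT_RATE = 0.05;

    double totalForRetail(double amount) {
        return applyDiscount(amount);
    }

    double totalForWholesale(double amount) {
        return applyDiscount(amount);
    }

    private double applyDiscount(double amount) {
        double discounted = amount - (amount * DISCOUNT_RATE);
        return Math.round(discounted * 100.0) / 100.0;
    }
}
```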
Incorporating refactoring into the development process can also extend the life of an application, allowing it to live in different environments, such as the cloud. However, refactoring doesn’t always reshape code to a well-balanced equilateral triangle. Plenty of pitfalls exist that can derail a project, despite refactoring best practices. Let’s look at seven mistakes that can impact the outcome of an application refactoring project.
Mistake #1: Starting with the Database or User Interface
When modernizing a monolith, there are three tiers you can focus on: the user interface, the business logic, or the data layer. There’s a temptation to go for the easy wins and start with the user interface, but in the end you may have a shinier front end while still facing the same issues that triggered the modernization initiative in the first place: exploding technical debt, decreasing engineering velocity, rising infrastructure and licensing costs, and unmet business expectations.
On the other hand, the database layer is often an early target to modernize or replace due to escalating licensing and maintenance costs. It would feel great to decompose a monolithic database into smaller, cloud-native data stores using faster, cheaper open-source alternatives or cloud-based data layer services. Unfortunately, that’s putting the cart before the horse. To break down a database effectively, you need to first decompose the business logic that uses the data services.
By decomposing and refactoring the business logic you can create microservices that eliminate cross-table database dependencies and pair new independent data stores with their relevant microservices. Likewise, it’s easier to build new micro-frontends for these independent microservices once they have been decomposed with crisp boundaries that minimize or eliminate interdependencies.
The final consideration is managing risk. Your data is gold and any changes to the data layer are super high risk. You should only change the database once, and only after you have decomposed the monolithic business logic into microservices with one data store per microservice.
Focusing on the business logic first optimizes microservice deployment to reduce dependencies and duplication. It ensures that the data layer is divided to deliver a reliable, flexible, and scalable design.
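As a rough illustration of putting the business logic first, a hypothetical extracted order service might own its own data store and reach customer data only through the customer service’s API, rather than joining across shared tables. All names and interfaces in this Java sketch are assumptions for illustration only:

```java
// Hypothetical boundary for an extracted Order microservice.
// It owns its own data store and never queries customer tables directly.
interface OrderRepository {                 // backed by the order service's own database
    void save(Order order);
    Order findById(String orderId);
}

interface CustomerClient {                  // remote call to the customer microservice's API
    boolean customerExists(String customerId);
}

record Order(String orderId, String customerId, double total) {}

class OrderService {
    private final OrderRepository orders;
    private final CustomerClient customers;

    OrderService(OrderRepository orders, CustomerClient customers) {
        this.orders = orders;
        this.customers = customers;
    }

    void placeOrder(Order order) {
        // Cross-service data lives behind an API, not a shared table join.
        if (!customers.customerExists(order.customerId())) {
            throw new IllegalArgumentException("Unknown customer: " + order.customerId());
        }
        orders.save(order);
    }
}
```

With a boundary like this in place, the order data store can later be split out and migrated independently, which is far less risky than reworking the database first.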
Mistake #2: Boiling the Ocean
Boiling the ocean means complicating a task to the point that it is impossible to achieve. Whether teams fixate on minutiae or allow scope creep, refactoring projects can quickly evolve into a mission impossible. Simplifying the steps makes the project easier to control.
One common mistake in refactoring is trying to re-architect an entire application all at once. While a perfect, fully cloud-native architecture could be the long-term goal, a modernization best practice is to select one or a small number of domains or functional areas in the monolith to refactor and move into microservices. These new services might be prioritized by their high business value, high costs, or shared platform value. Many very successful modernization projects extract only a key handful of services and leave the remaining monolith as is.
For example, instead of jumping into a more complex service-mesh topology first, take a more practical, interim step with a hub and spoke topology that centralizes traffic control, so messages coming to and from spokes go through the hub. The topology reduces misconfiguration errors and simplifies the deployment of security policies. It enables faster identification and correction of errors because of its consolidated control.
Trying to implement a full-mesh topology increases connections, complicating monitoring and troubleshooting efforts. Once comfortable with a simpler topology, then look at a service mesh. Taking a step-by-step approach prevents a mission-impossible scenario.
Mistake #3: Ignoring Common Code
Although refactoring for microservices encourages exclusive class creation, it also discourages re-inventing the wheel. If developers approach a refactoring project assuming that every class must be exclusive to a single service, they may end up with an application full of duplicate code.
Instead, programmers should evaluate classes to determine which ones are used frequently. Putting frequently used code into shared or common libraries makes it easier to update and reduces the chances that different implementations may appear across the application.
However, common libraries can grow uncontrolled if there are no guidelines in place as to when and when not to turn a class into a shared library. Modernization platforms can detect common classes and help build rational and consistent common libraries. Intelligent modernization tooling can ensure common code is not ignored while minimizing the risk of a library monolith.
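For example, a validation routine that several services would otherwise copy can be promoted into a small, versioned shared library. The sketch below is hypothetical (the package and class names are illustrative), but it shows the shape of a common-library candidate:

```java
// Hypothetical shared library module (e.g. common-validation.jar) reused by several services,
// instead of each service carrying its own copy of the same routine.
package com.example.common.validation;   // illustrative package name

public final class EmailValidator {
    private EmailValidator() {}

    // One implementation, versioned and updated in a single place.
    public static boolean isValid(String email) {
        return email != null && email.matches("^[^@\\s]+@[^@\\s]+\\.[^@\\s]+$");
    }
}
```

Guidelines should still define when a class earns a place in the shared library; promoting everything is how a library monolith forms.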
Mistake #4: Keeping Dead Code Alive
Unreachable dead code can usually be detected by a variety of source code analysis techniques. The more dangerous form of dead code is code that is still reachable but no longer used in production. This happens when functions become obsolete, get replaced, or are simply forgotten as new services are added. Using static and dynamic analysis, developers can identify this reachable dead code, or “zombie code,” with observability tooling that compares actual production usage and user access against the static application structure.
This type of dead code exists because many coders are afraid to touch old code as they are unsure of what it does or what it was intended to do. Rather than risk disrupting the unknown, they let it continue. This is just another example of technical debt that piles up over time.
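Conceptually, finding zombie code means intersecting what static analysis says is reachable with what production observability says is actually executed. Here is a simplified, hypothetical Java sketch; in practice the input sets would come from your analysis and tracing tools, and the method names shown are invented for illustration:

```java
import java.util.HashSet;
import java.util.Set;

// Simplified sketch: methods that static analysis marks as reachable
// but production traces never record are "zombie code" candidates.
public class ZombieCodeFinder {

    static Set<String> zombieCandidates(Set<String> staticallyReachable,
                                        Set<String> observedInProduction) {
        Set<String> candidates = new HashSet<>(staticallyReachable);
        candidates.removeAll(observedInProduction);   // reachable but never executed
        return candidates;
    }

    public static void main(String[] args) {
        Set<String> reachable = Set.of("OrderService.placeOrder",
                                       "OrderService.legacyExport",   // old feature, still compiled in
                                       "BillingService.charge");
        Set<String> observed  = Set.of("OrderService.placeOrder",
                                       "BillingService.charge");

        System.out.println("Zombie candidates: " + zombieCandidates(reachable, observed));
    }
}
```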
Mistake #5: Guessing on Exclusivity
Moving toward a microservice architecture means ensuring that application entities such as classes, beans, sockets, or transactions appear in only one microservice. In other words, every microservice performs a single function with clearly defined boundaries.
The decoupling of functionality allows developers to build, deploy, and scale applications independently. The concept enables faster deployments with lower risk than older monolithic applications. However, determining the level of exclusivity can be challenging.
Intelligent modernization tooling can analyze complex interdependencies and help design microservices that maximize exclusivity. Without automated tools, this is a long, manual, painstaking process that is not based on measurements and analytics but most often relies on trial and error.
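As a rough illustration, service exclusivity can be thought of as the share of a service’s classes that no other service depends on. The following simplified Java sketch is hypothetical; real tooling derives these graphs from static and dynamic analysis rather than hand-built maps:

```java
import java.util.Map;
import java.util.Set;

// Simplified sketch: exclusivity = fraction of a service's classes
// that are not used by any other service.
public class ExclusivityCalculator {

    static double exclusivity(String service,
                              Map<String, Set<String>> classesByService,
                              Map<String, Set<String>> classUsageByService) {
        Set<String> ownClasses = classesByService.get(service);
        long exclusive = ownClasses.stream()
                .filter(cls -> classUsageByService.entrySet().stream()
                        .filter(e -> !e.getKey().equals(service))
                        .noneMatch(e -> e.getValue().contains(cls)))
                .count();
        return (double) exclusive / ownClasses.size();
    }

    public static void main(String[] args) {
        Map<String, Set<String>> classesByService = Map.of(
                "orders",  Set.of("OrderService", "OrderMapper", "PriceUtil"),
                "billing", Set.of("BillingService"));
        Map<String, Set<String>> classUsageByService = Map.of(
                "orders",  Set.of("OrderService", "OrderMapper", "PriceUtil"),
                "billing", Set.of("BillingService", "PriceUtil"));  // billing also uses PriceUtil

        System.out.printf("orders exclusivity: %.0f%%%n",
                100 * exclusivity("orders", classesByService, classUsageByService));
    }
}
```

Low exclusivity signals shared or entangled classes that either belong in a common library or need to be untangled before the service can be deployed independently.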
Mistake #6: Forgetting the Architecture
Refactoring focuses on applications. How efficiently does the code accomplish its tasks? Is it meeting business requirements for agility, reliability, and resiliency? Without looking at the architecture, improvements may be limited. Static code analysis tools will help identify common code “smells,” but they ignore the architecture. And architectural technical debt is the biggest contributor to cost, slow engineering velocity, sluggish performance, and eventual application failures.
System architects lack the tools needed to answer questions regarding performance and drift. Until architectural constructs can be observed, tracked, and managed, no one can assess the impact the architecture has on refactoring. Just like applications, architecture can accumulate technical debt.
Architectural components can grow into a web of class entanglements and long dependency chains. The architecture can exhibit unexpected behavior as it drifts away from its original design. Unfortunately, without the right tools, this technical debt can be hard to identify, let alone quantify.
Mistake #7: Modernizing the Wrong Application
Assessing whether you should modernize and refactor an application at all is the critical first step. Is the application still important to the business? Can it be more easily replaced by a SaaS or COTS alternative? Has the business changed so dramatically that the application should simply be rewritten? How much technical debt is the app carrying, and how hard will it be to refactor?
Assessment tools that focus on architectural technical debt can help quantify project scope in terms of time, money, and resources. When deployed appropriately, refactoring can help project managers break down an overwhelming task into smaller efforts that can be delivered quickly.
Building an Equilateral Triangle
When software development teams successfully manage the three constraints of time, quality, and resources, they create a well-balanced solution that is delivered on time and within budget, containing the requested features and functionality. They have momentarily built an equilateral triangle.
Creating an Equilateral Triangle with Automation
With AI-powered tools, refactoring projects will accelerate. Java or .NET developers can refactor their monoliths, reduce technical debt, and create a continuous modernization culture. If you’re interested in avoiding refactoring pitfalls, schedule a vFunction demo to see how we can help.
To keep pace in their marketplace, many businesses today are attempting to modernize their business and the legacy apps they depend on by moving them to the cloud. But experience has shown that migrating workloads requires a structured framework to guide developers and IT staffers in this new environment. That’s what the cloud center of excellence (CCOE) is all about. According to Gartner, a CCOE is the optimal way to ensure cloud success.
Simply lifting and shifting legacy software as-is to the cloud still leaves you with a monolith and merely changes the location of your problem. Most legacy apps can run fine in the cloud but can’t take advantage of today’s cloud native ecosystem and managed services, and moving them unchanged to the cloud does little to fix their issues. That’s why application modernization, which restructures apps to give them cloud-native capabilities, must be an essential component of any sound cloud strategy.
Application modernization can itself be a complex and difficult process: the historical failure rate for such projects is 74%. But by incorporating a specific application modernization focus into your CCOE, you can avoid common pitfalls, enforce best practices, and lay a firm foundation for success.
What exactly is a CCOE? AWS offers this definition:
“A Cloud Center of Excellence (CCoE) is a cross-functional team of people responsible for developing and managing the cloud strategy, governance, and best practices that the rest of the organization can leverage to transform the business using the cloud.”
The cloud center of excellence guides the entire organization in developing and executing its approach to the cloud. According to the AWS definition, the CCOE has three main responsibilities:
1. Cloud strategy
Your cloud strategy outlines the general approach, ground rules, and tools your organization will use in moving software and workflows to the cloud. It defines the business outcomes you want to achieve and establishes the technical standards and guidelines you’ll follow, taking into account issues such as costs vs benefits, risks, organizational capabilities, and legal or compliance requirements.
2. Governance
Red Hat defines cloud governance as “the process of defining, implementing, and monitoring a framework of policies that guides an organization’s cloud operations.” A governance regime will include specific rules and guidelines that aim at minimizing complexity and risk by defining how individuals and teams in your organization use cloud resources.
3. Best practices
Cloud best practices often differ substantially from those developed in on-site data centers. So, a fundamental part of a CCOE’s responsibility is to introduce developers and IT staffers to practices that are optimized for the cloud environment.
The Importance of Application Modernization
Because today’s market environment is highly dynamic, companies must be able to quickly respond to changes in customer requirements or other aspects of the competitive landscape. But legacy software, by its very nature, is difficult to adapt to meet new requirements.
Legacy apps are typically monolithic in structure (the codebase is organized as a single unit of perhaps millions of lines of code with complex dependencies interwoven throughout). As a result, such apps are usually hard for modern developers to understand and can be so brittle that even small changes might introduce downstream issues that bring the entire system to a screeching halt.
Plus, because these older apps were normally designed to operate as a closed system, integrating them into modern interdependent cloud managed services can be difficult and complex.
But many organizations still depend on these technologically ancient apps for some of their most business-critical processing, so they can’t simply be retired. The alternative is to modernize them by refactoring the monolithic code into a cloud-native, microservices architecture.
The effect of that kind of modernization (as contrasted with simply moving apps to the cloud with little change) is to give you a suite of re-architected applications that, while maintaining continuity for users, have the flexibility to be easily updated to meet new business requirements.
Why App Modernization Should Be a Core Competency of Your CCOE
Your cloud center of excellence should be your organization’s acknowledged authority on all things cloud. And application modernization is all about restructuring your legacy apps so that they can integrate smoothly into the cloud ecosystem.
Refactoring legacy apps to a cloud-native architecture is an inherently complex process that demands a high degree of expertise in architecture, design, cloud technology and operations. That’s why it’s critical that your CCOE also function as an MCOE (modernization center of excellence). Otherwise, your modernization efforts are very likely to struggle, and you stand a good chance of adding a percentage point or two to that 74% of app modernization projects that fail to meet their goals.
Not only will your CCOE/MCOE provide the fundamental cloud-related technical expertise and guidelines that underpin any successful effort to translate workflows to the cloud, but it must also help reshape your entire IT organization to fit the technological and operational requirements of the cloud environment.
For example, when a company’s modernization efforts lack the guidance and governance that an MCOE should provide, the organization is very likely to run afoul of Conway’s Law. This maxim, formulated in 1967 by software expert Melvin Conway, declares that:
Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization’s communication structure.
The practical effect of Conway’s Law is that to be effective, your modernization teams must be restructured to reflect a whole new set of technological and organizational imperatives imposed by the cloud environment. In other words, to successfully refactor legacy apps to a microservices architecture you should reorganize your development teams based on the way cloud-based microservices work. Neglecting to restructure your development organization based on specific cloud-native technological patterns is an almost sure recipe for failure. As software engineer Alex Kondov so graphically puts it:
“You can’t fight Conway’s Law… Time and time again, when a company decides that it doesn’t apply to them they learn a hard lesson… If the company’s structure doesn’t change, the software will slowly evolve into something that mirrors it.”
Reshaping your entire IT operation (and by extension your organization as a whole) should not be undertaken lightly. It should only be done based upon authoritative guidance provided by a team that has an acknowledged depth of experience and expertise; in other words, a well-established and respected CCOE/MCOE.
Implementing a CCOE/MCOE is an Emerging Best Practice for Successful Companies
Today more and more companies are recognizing the critical necessity of having an effective CCOE/MCOE organization to guide their modernization efforts.
For example, an IDC report relates the experience of a large logistics company that failed three times in its efforts to move applications and workflows to the cloud. But it succeeded on its fourth attempt “when it created a multi-persona cloud center of excellence team responsible for architecture, automation, governance, operations, and unified delivery model.”
This experience is far from unique—other well-known companies, such as Dow Jones, have reported similar success stories. So, it’s not surprising that in a 2022 survey of corporate cloud infrastructure stakeholders an impressive 90% of respondents said they either have or plan to institute a cloud center of excellence. According to Computerworld, 64% of SMBs (small and medium-sized businesses) have already implemented CCOE-like teams.
Next Steps: Create or Upgrade Your MCOE
Ideally, you should have a CCOE/MCOE organization in place from the very beginning of your company’s modernization journey. But even if you’ve already started without an MCOE, it’s critical for long-term success that you initiate one as soon as possible.
If you already have an established CCOE/MCOE, you’ll want to focus on ensuring that it has the requisite skills, expertise, experience, mandate, and perhaps most important, management backing to provide authoritative leadership for your organization.
If, on the other hand, you have not yet instituted an MCOE (or an MCOE focus within your CCOE), now’s the time to put one in place. But how do you get started?
Getting Started
Whether you’re starting or upgrading your CCOE/MCOE, there are a couple of essential steps you should take.
The first and most important step is to ensure that your company’s executive management is visibly committed to the program. The CCOE/MCOE team will not only require budget and staffing resources, but must also have clear authority to set and enforce uniform technical and operational guidelines that apply to all cloud and modernization initiatives across the organization.
Then you must assemble and train your team, ensuring that it either has or can tap into the highest levels of cloud-related technical skills. Remember that your CCOE/MCOE team must not only be able to provide authoritative guidance concerning industry-wide technical best practices, but must do so within the context of your organization’s unique culture, goals, and cloud strategy.
But if your company is like most, you’re likely to discover that your in-house staff simply doesn’t possess all the experience and skills required to build a CCOE/MCOE that can be effective at providing expert cloud guidance and governance companywide. The best way to ensure that your team can tap into all the technical skills and tools it needs is to partner with another company that specializes in cloud and modernization technologies.
vFunction not only offers industry-leading experience and expertise in cloud-based application modernization, but also provides an advanced, automated modernization platform that can deliver data-based assessments of the modernization requirements of your legacy apps, and then substantially automate the process of refactoring those apps into microservices.
If you’re ready to take the next step in creating or upgrading your CCOE/MCOE team, vFunction can help. Please contact us today.
vFunction today launched the Continuous Modernization Manager (CMM), a new product for architects and developers to continuously monitor, detect, and pinpoint application architecture drift issues before they cause technical debt meltdowns. vFunction CMM enables software architects to shift left and prevent technical debt disasters by baselining, observing, pinpointing, and alerting on application architecture drift issues before they result in business catastrophes like we’ve seen with Southwest Airlines, Twitter, FAA, and countless, unnamed others. Read the full press release.
Architectural Technical Debt
Architectural technical debt accumulates unobserved in the shadows until disaster strikes – literally a silent killer for business. Application architects, up to this point, have lacked the architectural observability, visibility, and tooling to understand, track, and manage architectural technical debt. This has resulted not only in technical problems such as architectural drift and erosion but also in numerous large and small disasters.
So what is architectural technical debt? It’s the accumulation of architectural components, decisions, and drift that results in “a big ball of mud” that architects are unable to see or track – making it essentially an opaque “black box.” Architectural technical debt consists of class entanglements, deep dependencies, dead-code, long dependency chains, dense topologies, and lack of common code libraries. Architectural debt is NOT source code quality or cyclomatic complexity, although these are critical technical debt elements to track and manage.
Architectural technical debt is hard to find and harder to fix. It affects product quality, feature delivery lead time, and testing times, and, very importantly, it is the primary predictor of modernization complexity – how hard it will be to modernize (refactor or re-architect) an application. Peter Drucker established one of the most basic business principles when he stated, “You can’t improve what you can’t measure.” He also emphasized that you can’t stop at measurement; you need to manage it as well. Architectural debt has been hard to measure, and thus hard to find and fix. You need to observe the architecture, establish a baseline, and detect architectural drift, then apply intelligent modernization tooling and techniques to manage the architectural anomalies.
“One of the most critical risks facing organizations today is architectural technical debt,” said Jason Bloomberg, Managing Partner of analyst firm Intellyx. “The best way to keep such debt from building up over time is to adopt Continuous Modernization as an essential best practice. By measuring and managing architectural technical debt, software engineering teams can catch architectural drift early and target modernization efforts more precisely and efficiently.”
Architectural Observability
Observable architecture is the goal. Today, architects lack the observability, visibility, and tooling to understand, track, and manage architectural technical debt. They are looking to answer questions like:
What is the actual architecture of my monolith?
How is it behaving in production?
What’s my architectural baseline?
Has the app architecture drifted from the norm?
Do I have a major architecture issue I need to fix now?
Where is it and how do I fix it?
If I can’t identify my core services, their key dependencies, my common classes, my highest-debt classes, and the relevant service exclusivity, I’m flying blind through the software development lifecycle from an architectural perspective.
Shift Left for Architects
vFunction Continuous Modernization Manager lights up the application black boxes and ball-of-mud apps – making the opaque transparent – so architects can shift left into the ongoing software development lifecycle from an architectural perspective. This allows them to manage, monitor, and fix application architecture anomalies on an iterative, continuous basis before they blow up into bigger issues. CMM observes Java and .NET applications and services to baseline the architecture and then monitor for architectural drift and erosion, detecting critical architectural anomalies including the following (a simplified sketch of this kind of baseline comparison follows the list):
New Dead Code Found: Detects emerging dead code, indicating that unnecessary code has surfaced in the application or that the baseline architecture has drifted and existing class or resource dependencies have changed.
New Service Introduced: Based on the observed baseline service topology, vFunction identifies and alerts when a new service is detected, indicating that a new domain or major architectural event has occurred.
New Common Classes Found: Building a stable, shared common library is a critical modernization best practice to reduce duplicate code and dependencies. Newly identified common classes can be added to a common library to prevent further technical debt from building up.
Service Exclusivity Dropped: vFunction measures and baselines service exclusivity to determine the percentage of independent classes and resources of a service, alerting when new dependencies are introduced that expand architectural technical debt.
New High-Debt Classes Identified: vFunction identifies the classes that contribute most to application complexity. A “high-debt” class score is determined by its dependents, dependencies, and size, pinpointing a critical software component that should be refactored or re-architected.
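To make the baseline-versus-current comparison behind these alerts concrete, here is a simplified, hypothetical Java sketch. It is not vFunction’s implementation; it only illustrates flagging a newly introduced service and a drop in service exclusivity against a stored baseline:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Simplified illustration of baseline-vs-current drift checks:
// flag services that did not exist in the baseline and services
// whose exclusivity dropped beyond a threshold.
public class DriftDetector {

    record Snapshot(Set<String> services, Map<String, Double> exclusivityByService) {}

    static List<String> detectDrift(Snapshot baseline, Snapshot current, double dropThreshold) {
        List<String> alerts = new ArrayList<>();

        for (String service : current.services()) {
            if (!baseline.services().contains(service)) {
                alerts.add("New service introduced: " + service);
            }
        }
        for (var entry : current.exclusivityByService().entrySet()) {
            double before = baseline.exclusivityByService().getOrDefault(entry.getKey(), 1.0);
            if (before - entry.getValue() > dropThreshold) {
                alerts.add("Service exclusivity dropped: " + entry.getKey());
            }
        }
        return alerts;
    }

    public static void main(String[] args) {
        Snapshot baseline = new Snapshot(Set.of("orders", "billing"),
                Map.of("orders", 0.90, "billing", 0.85));
        Snapshot current = new Snapshot(Set.of("orders", "billing", "notifications"),
                Map.of("orders", 0.70, "billing", 0.85, "notifications", 1.0));

        detectDrift(baseline, current, 0.10).forEach(System.out::println);
    }
}
```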
Users will be notified of changes in the architecture through Slack, email, and the vFunction Notifications Center. Through vFunction Continuous Modernization Manager, architects can configure schedules for learning and analysis, as well as baseline measurements.
New in Modernization Hub and Assessment Hub
In addition, the newest release of vFunction Modernization Hub has added advanced collaboration capabilities to enable modernization architects and teams to more easily work together. New analytics also pinpoint the highest technical debt classes to focus the refactoring priorities. vFunction Assessment Hub has added a new Multi-Application Assessment Dashboard to analyze technical debt across a broad application portfolio.
Newly announced, vFunction Assessment Hub now includes a Multi-Application Assessment Dashboard that tracks and compares different parameters for hundreds of applications. Multiple applications can be analyzed at a glance for technical debt, aging frameworks, complexity, state, and additional architectural factors.
Also new in vFunction Modernization Hub 3.0 is a set of collaboration features that lets modernization teams work together more effectively, avoiding conflicts and improving clarity by working in parallel on different measurements and later merging them into one. A user can protect services they wish to keep unmodified, preventing conflicts when multiple teams are working on the same measurement, especially when adding or removing classes in common areas.
Modernization is Not Linear: It’s a Continuous Best Practice
The most important takeaway from this announcement is that modernization is not a one-and-done project. It needs to be an iterative, cyclical best practice process that requires teams to adopt and commit to a culture of continuous measurement and improvement – led by architects shifting-left into their development processes and taking firm ownership of their actual architectures. Create observable architecture through architectural observability and tooling that catches architectural drift before it leads to greater issues. We’ve all seen what can happen if you let the silent killer of architectural technical debt continue to lurk in the shadows. Shine a light on it, find it, fix it, and prevent future monoliths from ever forming again.
Technical debt. It’s a term most people have never heard of. But over the holiday season of 2022, thousands of air travelers became personally acquainted, much to their dismay, with the disastrous impact a failure to modernize critical applications and eliminate technical debt can have on companies and their customers.
Starting in late December 2022, Southwest Airlines was forced to cancel almost 17,000 flights, shutting down two-thirds of its operations during one of the busiest travel seasons of the year. How could such a catastrophe happen?
The Southwest Airlines Shutdown Debacle
Year-end winter storms covered much of the country with snow, ice, bitter cold, and high winds, forcing all the nation’s major airlines to scramble to adjust their flight schedules and aircrew assignments. Most did so without inflicting severe inconveniences on their passengers. But the story at Southwest was different. The carrier had to cancel 59% of its flights while other major airlines canceled only 3% of theirs.
The difference was software. As Matt Ashare, a reporter writing for CIO Dive, succinctly put it:
“Outdated technology was at the heart of the Southwest meltdown.”
Southwest manages its flight and crew scheduling using an application called SkySolver, which the company admits is nearing its end of life. This system first went into service in 2004, the same year that Facebook and Gmail were introduced.
But unlike those services, which are still going strong, Southwest failed to keep its SkySolver implementation up to date as airline flight volumes surged 69% between 2004 and 2019. Although Southwest’s version of SkySolver was specifically designed to handle aircrew scheduling issues, the system was overwhelmed when severe weather caused nationwide flight disruptions. As a result, when flight schedules were scrambled, aircrew members were forced to resort to manual procedures to inform the airline of their whereabouts and get reassigned.
With 20,000 frontline employees trying to coordinate their activities through phone calls, text messages, and emails, Southwest found itself unable to track the whereabouts of its pilots and flight attendants and match them with planes. It had no choice but to shut down most of its operations.
How Technical Debt Shut Southwest Airlines Down
The Southwest shutdown provides a classic case study of the impact technical debt can have on a company that neglects to deal with it in a timely fashion.
What is technical debt? The term was coined by computer scientist Ward Cunningham in 1992 to describe what happens when software teams take shortcuts that seem expedient in the short term, but that result in higher remediation costs in the future.
That aptly describes what happened at Southwest. The company grew quickly and was considered a forward-looking innovator in the airline industry, particularly in the area of customer experience. And it wasn’t averse to investing in its technology. Earlier in 2022 the company announced a plan to spend $2 billion on customer experience upgrades.
The problem was that Southwest focused so intently on investment priorities that seemed to promise immediate earnings rewards, such as customer experience improvements, that it failed to address the needs of the software that ran its own internal operations. Ironically, that choice resulted in what’s probably the worst customer experience fiasco in company history.
It’s not that Southwest was unaware that its crew scheduling application was in desperate need of attention. The pilots union had been loudly complaining about the issue since 2015. In fact, in 2016 the pilots and aircraft mechanics unions went so far as to proclaim votes of no confidence in then-CEO Gary Kelly due to his “inability to prioritize the expenditure of record-breaking revenues toward investments in critically outdated IT infrastructure and flight operations.”
“Southwest managed its technology portfolio as a near-term cost instead of a long-running investment driving growth. This misguided approach resulted in an unmanageable level of technical debt exposing Southwest in the most public way possible — the failure to deliver an acceptable customer experience.”
Because of its failure to address technical debt in its crew scheduling software, Southwest took an immense hit to both its reputation and, with total losses from the disruption running upwards of $1 billion, to its bottom line. Even worse, it probably lost some customers it will never regain because they no longer trust Southwest’s ability to smoothly handle potentially disruptive events.
Even the FAA Isn’t Immune to Technical Debt-Related Glitches
The U.S. government agency that oversees airlines, the Federal Aviation Administration (FAA), experienced its own embarrassing shutdown of flights just weeks after the Southwest debacle.
Because of a failure in its 30-year-old Notice to Air Missions (NOTAM) system, which provides critical flight safety and other information to pilots, the FAA was forced to cancel departures nationwide for a short period of time. Geoff Freeman, president and chief executive of the U.S. Travel Association, issued a statement saying:
“Today’s FAA catastrophic system failure is a clear sign that America’s transportation network desperately needs significant upgrades.”
No Organization Is Safe As Long As It Ignores Technical Debt
These cases illustrate a fundamental reality that every business leader should be aware of: any organization that depends on software for its operations but neglects to deal with its technical debt makes itself vulnerable to being suddenly pitched into a crisis that can ruin both its reputation and its bottom line. If you want to shield your company from the potential of suffering similar disruptions, you need to be proactive about dealing with technical debt.
There’s an old saying that goes, “If it ain’t broke, don’t fix it.” That, apparently, was the attitude of executives at Southwest concerning their flight crew scheduling software. Although flight crews complained about the SkySolver system, the software’s deficiencies didn’t seem to be having any direct negative impact on customer experience. So dealing with the application’s all-too-evident technical debt issue remained low on the company’s priority list.
But failing to invest in eliminating technical debt from critical systems because those systems seem to be working acceptably at the moment is a high-stakes gamble. As CIO Dive’s Matt Ashare says,
“The bill for tech debt rarely arrives on a good day… Systems tend to fail when stressed, not when conditions are optimal. Waiting for a bad situation to pay down technical debt is a high-risk strategy.”
Business leaders should also consider that the longer technical debt is allowed to remain in their systems, the greater the cost of fixing it when that task finally becomes unavoidable.
For example, Southwest now says that it’s committed to upgrading SkySolver. But the software’s current vendor, GE Flight Efficiency Services, says there have been eight update releases in just the last year. That means that Southwest is presumably at least eight releases behind. And to make matters worse, SkySolver is an off-the-shelf package that each airline optimizes for its own operations. Integrating those eight or more upgrades with Southwest’s previous modifications is almost certain to be a time-consuming, costly, and risky endeavor.
Are your company’s systems burdened with a load of technical debt? Unless you don’t depend on software at all, or are already proactive in addressing and reducing technical debt, the answer to that question is very likely, “yes.”
Your biggest hindrance in dealing with technical debt may well be simple inertia. Remember that technical debt continues to grow as long as it’s in place, and so will the difficulty and cost of fixing it. If you wait until some sudden, urgent, and possibly very public crisis, such as the one Southwest had to suffer through, forces you to address your technical debt issue, both the costs and the risks of fixing the problem will multiply.
One thing to remember is that even recently created software may have technical debt issues if developers cut corners to get it released more quickly. As William Aimone, Managing Director at Trenegy, explains:
“Technical debt is the result of issues that accrue over time due to previously rushed or poorly executed IT projects. Teams often implement an easy or quick IT solution to save money, get something released quickly, or meet deadlines, but it always comes back to bite.”
So, you shouldn’t take it for granted that because the software you depend on has all its updates installed, it must be free of technical debt.
Getting Started
Dealing proactively with technical debt needs to be a continuous best practice and become part of your development culture. To be successful, you’ll need both the right expertise and the right tools. Most businesses simply haven’t applied the time and resources to address technical debt and can benefit from new skills, tools, and guidance.
A good first step in dealing with your technical debt issue would be to consult with an expert partner that can come alongside you and help guide you on the journey toward freedom from technical debt.
vFunction not only provides experience and expertise in dealing with technical debt but an advanced application modernization platform that can help you assess where you stand with regard to technical debt. It can then substantially automate the process of refactoring your applications to modernize them and eliminate technical debt, propelling your organization towards a more efficient and competitive future.
If you’d like to know more about how you can deal with your organization’s technical debt, contact vFunction today to see how we can help.
When Watts Humphrey stated that every business is a software business, organizations realized that their survival depended on software. Today, developers also need to view cybersecurity as part of their responsibilities. It’s not enough to add security as an afterthought.
According to Hiscox’s 2022 report, many organizations are using the U.S. National Institute of Standards and Technology’s (NIST) SP 800-160 standard as a blueprint for strengthening security defenses. Part of that standard offers a framework for incorporating security measures into the development process.
Patching security weaknesses after release is a little like shutting the barn door after the animals have escaped. Developers chase after the elusive vulnerability, trying to corral and correct it. No matter how hard they try, developers can’t make an existing system as secure as one built with security best practices in mind.
When modernizing legacy systems, developers often adopt a microservices architecture. However, making that the default choice means ignoring the associated security risks. They must assess the potential risks and mitigation methods of monolithic vs. microservice designs to determine the most secure implementation.
Security Risks: Microservices vs. Monoliths
Security, like time, is relative. Is a monolith application less secure than microservices? Not always. For example, a simple monolith application with a small attack surface may be more secure than the same application using microservices.
An attack surface is the set of points on the boundary of a system where bad actors can gain access. A monolith application often has a smaller attack surface than its microservice-based counterpart.
That said, attack surfaces are not the only security concerns facing developers as they look to incorporate security into the design of an application. Other areas to consider include coupling, authentication, and containerization.
Security Concern #1: Coupling vs. Decoupling
Legacy software may have thousands of lines of code wrapped into a single application. The individual components are interconnected, creating a tightly coupled piece of software. Microservices, by design, are loosely coupled. Each service is self-contained, resulting in fewer dependencies.
When bad actors compromise monoliths, they gain access to the entire application. The damage can be catastrophic. With microservices, a single compromise does not guarantee access to multiple services.
Once exploitation is detected, it can take months to contain. IBM’s latest Cost of a Data Breach report found that the average time to containment was 75 days. The shorter the data breach lifecycle, the lower the cost.
Given the inherent coupling of a monolith, finding the vulnerability can be challenging, especially if dead code has accumulated. The discrete nature of microservices makes it easier for programmers to locate a possible breach, reducing its lifecycle length and associated costs.
Security Concern #2: Attack Surface Sizes
As mentioned above, attack surfaces are points on the boundary of a system where an unauthorized user can gain access. The larger the boundary, the higher the risk. While current methods may default to microservices, they may not be the most secure architecture in every instance.
For example, an application with a few modules will have a smaller attack surface than the multiple microservices required to deliver the same functionality. The individual surfaces would be smaller, but the total surface of the application would be larger.
At the same time, a monolith application can become difficult to manage if it becomes too complex. Most legacy monoliths are complex, with multiple functions, modules, and subroutines. Developers must weigh attack surfaces against complexity when designing an application.
When a vulnerability is identified, it may take hours or even days to locate and patch the weakness in a monolithic application. Microservices are discrete components that enable programmers to find and correct flaws quickly.
Security Concern #3: Authentication Complexity
Monoliths use one-and-done authentication. Since accessing different resources occurs within the same application, identifying the requesting source in each module is redundant. However, that same approach shouldn’t be applied when migrating to a microservices design.
Microservices communicate through application programming interfaces, called APIs, when they need access to another microservice. Every request is an opportunity for compromise. That’s why microservices must incorporate authentication and authorization functionality in their design.
Adding this level of security as an afterthought creates its own set of vulnerabilities. Ensuring that each microservice has an authentication code in place can be challenging, depending on the number of services. If multiple developers are involved, implementation can vary. Finally, programmers from a monolith environment may overlook the requirement if it’s not part of their coding mindset.
Making an application less vulnerable is an essential feature of security by design. Application designs should include robust authentication and authorization code, whether monolith or microservices. Developers should consider a zero-trust implementation that requires continuous verification.
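As a minimal illustration of designing authentication in rather than bolting it on, the hypothetical Java sketch below verifies an HMAC-signed bearer token on every incoming request before any business logic runs. The token format and secret handling are assumptions for illustration; a production system would typically use an established standard such as JWT or OAuth 2.0 with a vetted library:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Minimal sketch: every request between microservices must carry a token of the
// form "<payload>.<signature>", where the signature is an HMAC over the payload.
public class RequestAuthenticator {

    private final byte[] secret;

    public RequestAuthenticator(byte[] secret) {
        this.secret = secret;
    }

    public boolean isAuthorized(String authorizationHeader) {
        if (authorizationHeader == null || !authorizationHeader.startsWith("Bearer ")) {
            return false;                       // reject requests with no credentials
        }
        String token = authorizationHeader.substring("Bearer ".length());
        int dot = token.lastIndexOf('.');
        if (dot < 0) {
            return false;
        }
        String payload = token.substring(0, dot);
        String signature = token.substring(dot + 1);
        return hmac(payload).equals(signature); // verify integrity on every request
    }

    private String hmac(String payload) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret, "HmacSHA256"));
            byte[] raw = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(raw);
        } catch (Exception e) {
            throw new IllegalStateException("HMAC failure", e);
        }
    }

    public static void main(String[] args) {
        RequestAuthenticator auth =
                new RequestAuthenticator("demo-secret".getBytes(StandardCharsets.UTF_8));
        String payload = "service=orders;caller=billing";
        String token = payload + "." + auth.hmac(payload);                     // issued by a trusted party
        System.out.println(auth.isAuthorized("Bearer " + token));              // true
        System.out.println(auth.isAuthorized("Bearer " + payload + ".forged"));// false
    }
}
```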
Security Concern #4: Container Weaknesses
Moving applications into containers provides portability, fewer resources, and consistent operation. Both microservices and monoliths can operate in containers. However, containerized environments add another layer of security, provided they are managed correctly. Common security weaknesses include privileges, images, and visibility. Any application running in a container—whether monolith or microservice—shares these risks.
Privileges
Containers often run as users with root privileges because it minimizes potential permission conflicts. When containerized applications need access to resources within the container, developers do not need to worry about installation or read/write failures because of permissions.
However, running containers with root privileges elevates security risks. If the container is compromised, cybercriminals have access to everything in the container. Developers must consider using a rootless implementation or a least-privilege model to restrict access for both microservice and monolithic applications.
Images
A secure pipeline for containerized application images is essential for both monoliths and microservices. Using secured private registries and fixed image tags can reduce the risk of a container’s contents being compromised. Once an image is in production, the security risk increases exponentially.
Visibility
Tracking weaknesses during a container’s lifecycle can mitigate security risks for monoliths and microservices. Developers can deploy scanning and analysis tools to look for code vulnerabilities. They can also use tools for visibility into open-source components or applications.
In 2021, visibility concerns resulted in the federal government issuing scanning requirements for containers. The document outlines the tools needed to assess the container pipeline and images. The guidelines also recommend real-time container monitoring.
Security Concern #5: Monitoring Complexity
Runtime visibility is another security risk. Applications should include event logging and monitoring to record any potential threats. Alerts should be part of any visibility tool so unusual behaviors can be assessed.
Monoliths often have real-time logging in place. This feature was added to help troubleshoot problems in highly complex applications. Writing error messages to a log with identifiers can significantly reduce the time needed to research a weakness and create a fix.
Putting real-time monitoring in place for microservices is far more time-consuming. Logging programs are not written for one large application but for many smaller applications. Many development teams skimp on or even skip monitoring because they assume each microservice is so small that any problem will be easy to find. Unfortunately, in the midst of an attack, it’s rarely easy to find the weakness.
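One way to keep monitoring tractable across many small services is to attach a correlation identifier to every request and include it in each log line, so a single transaction can be traced end to end. The following minimal Java sketch, using only the standard logging API, is a hypothetical illustration of the pattern:

```java
import java.util.UUID;
import java.util.logging.Logger;

// Minimal sketch: tag every log line with a correlation ID so a request
// can be followed across microservice boundaries during an investigation.
public class CorrelatedLogging {

    private static final Logger LOG = Logger.getLogger(CorrelatedLogging.class.getName());

    static void handleRequest(String incomingCorrelationId) {
        // Reuse the caller's ID if present, otherwise start a new trace.
        String correlationId = incomingCorrelationId != null
                ? incomingCorrelationId
                : UUID.randomUUID().toString();

        LOG.info(() -> "[" + correlationId + "] request received");
        try {
            // ... business logic and downstream calls would propagate correlationId ...
            LOG.info(() -> "[" + correlationId + "] request completed");
        } catch (RuntimeException e) {
            LOG.severe("[" + correlationId + "] request failed: " + e.getMessage());
            throw e;
        }
    }

    public static void main(String[] args) {
        handleRequest(null);                        // new trace
        handleRequest("7f3a2c10-demo-correlation"); // continuing an upstream trace
    }
}
```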
Security By Design
Although improved cybersecurity may not be the motivating factor behind modernizing legacy software, it is an opportunity that should not be wasted. Recent white-hat efforts by Positive Technologies found that 93% of attacks were successful in breaching a company’s internal network. Their selected targets were taken from finance, fuel and energy, government, industry/manufacturing, and IT industries.
Compromised credentials, including administrator passwords, were successfully used in 71% of attacks. The company was able to exploit vulnerabilities in software (60%) and web (40%) applications. Their efforts highlight the need to strengthen security in deployed applications, whether they are monoliths or microservices.
Security can no longer be an afterthought when it comes to software design. Every developer needs to look at their code through a cybersecurity lens. Neither architecture is perfect, but developers must weigh their advantages and disadvantages to ensure a secure application.
To improve application security, consider including security professionals and using automated tools during development.
Security Professionals. If an organization has access to security professionals, use them. They can identify methods and tactics that cybercriminals use to compromise systems. With this knowledge, applications can be designed with security in mind.
Automated Tools. Tools exist to help with migrating legacy applications, securing code under development, and monitoring performance in production. These tools can help developers decide which architecture is appropriate for a given application and facilitate making it as secure as possible.
Just as every company realizes how essential software is to their survival, developers need to acknowledge that cybersecurity must be part of their toolset.
vFunction’s modernization platform for Java applications provides the tools needed to migrate legacy applications. Our Modernization Hub helps move monoliths to microservices and uses AI-based tools to track behaviors. The Hub also performs static code inspection of binaries. These resources make it possible for developers to spend more time ensuring that security protocols and best practices are incorporated as part of the design. Request a demo to learn more about how vFunction can help with your modernization needs.
ChatGPT, the most advanced conversational AI chatbot yet publicly revealed, is taking the world by storm. Millions of ordinary people are using it, and most are highly enthusiastic about its ability to create human-like written content that helps them in their daily lives. Software professionals, too, are taking note of this new kid on the AI block. For them ChatGPT is a portal into a future in which AI-augmented software engineering will inevitably disrupt traditional approaches to coding, maintaining, and updating the software applications modern businesses depend on.
“ChatGPT Is a Tipping Point for AI … The ability to produce text and code on command means people are capable of producing more work, faster than ever before… This is a very big deal. The businesses that understand the significance of this change — and act on it first — will be at a considerable advantage.”
In this article, we’ll use ChatGPT as an up-to-the-minute example of what AI-augmented software engineering can accomplish.
What Is AI-Augmented Software Engineering?
According to IEEE (the Institute of Electrical and Electronics Engineers),
Augmented intelligence is a subsection of AI machine learning developed to enhance human intelligence rather than operate independently of or outright replace it. It’s designed to do so by improving human decision-making and, by extension, actions taken in response to improved decisions.
AI-augmented software engineering applies the augmented intelligence concept to the realm of software development, maintenance, and improvement. In its practical application, the term describes an approach to software engineering that’s based on close collaboration between human developers and AI-enabled tools and platforms that are designed to assist and extend (but not replace) human capabilities.
To illustrate the importance of the collaborative aspect of augmented intelligence, the IEEE report cites the example of one clinical study aimed at detecting lymph node cancer cells. In that study, the AI system used had a 7.5 percent detection error rate. The error rate for human pathologists was 3.5 percent. But when the human pathologists and the AI system worked together, the error rate was just 0.5 percent.
What Can Today’s AI Do?
Software professionals around the world are now using ChatGPT to gain first-hand experience with the ways an advanced AI platform can extend the capabilities of application developers. Their reports highlight the benefits modern AI tools can provide for software engineering teams:
Writing New Code: According to one report, ChatGPT has “shocked” developers with its proficiency at writing code. As this user puts it, if you tell ChatGPT to do so, “it will happily create web pages, applications and even basic games” in any of the programming languages (such as C, Java, HTML, Python, etc.) that are widely used today. But, as we’ll see below, today’s AI still has some significant limitations in this area.
Explaining and Documenting Existing Code: One of the greatest benefits of ChatGPT is that you can give it a piece of existing code, ask what the code does, and receive a lucid, accurate explanation written in plain language. For developers working with legacy code, which is often highly opaque because of inadequate documentation, that’s a huge benefit that only an advanced AI platform can provide. In fact, the explanations the AI engine provides are so clear and well-written, they can also serve as a great learning tool for less experienced developers.
Enhancing QA and Defect Remediation: ChatGPT can analyze a piece of code to detect and explain bugs that human developers may overlook. It can also suggest fixes for the errors it uncovers. Advanced AI platforms can automate software testing to a significant degree, substantially shortening the development cycle.
Translating From One Language/Environment to Another: Developers can present ChatGPT with code written in one language and have that code accurately translated into the syntax of another language with which the coder may be less familiar.
Turbo Charging Low-Code/No-Code Development: Low-Code/No-Code (LCNC) is already having a big impact on operations in many companies. It allows business users, who may have few technical coding skills, to automate processes in their workflows with minimal assistance from IT professionals. The ability of ChatGPT to produce working code based on natural language inputs democratizes software creation even more. It is, as one observer put it, LCNC on steroids.
What Does AI-Augmented Software Engineering Mean for Developers?
A key aspect of the IEEE definition of augmented intelligence is that it affirms that the purpose of AI-augmented software engineering is not to replace the human element but to assist and enhance it. Jorge Hernández, a Sr. Machine Learning Research Engineer at Encora, explains how this works:
“AI-augmented software development helps reduce the cognitive load throughout the software development lifecycle … by helping manage the complexity of the problem, allowing workers to off-load routine tasks to the AI so that they can focus on the creative and analytical tasks that humans do best.”
Today’s AI can relieve developers of many tasks that are either mundane and repetitive or, on the other hand, forbiddingly intricate and complex, freeing them to focus on higher-level responsibilities such as architecture and overall design.
For example, by intelligently selecting generic or boilerplate code from the open-source universe and adapting it to the current use case, an AI-augmented coding assistant can relieve developers of the more trivial aspects of the software development process. As technical consultant Rob Zazueta says, “I can take that, modify it to fit my needs and cut through boilerplate stuff quickly, allowing me to focus on the more intensive kind of work the AI is not yet ready to handle.”
Similarly, by uncovering, explaining, and correcting bugs in complex legacy code, an advanced AI platform can save human engineers hundreds of hours of analysis and remediation time.
Deepak Gupta, CTO at LoginRadius, summarizes the impact of AI-augmentation in software engineering this way:
“Artificial intelligence is revolutionizing the way developers work, resulting in significant productivity, quality and speed increases. Everything — from project planning and estimation to quality testing and the user experience — can benefit from AI algorithms.”
Limitations of AI
It’s important to remember that AI engines don’t really think—they simply use patterns they discern in their training data to predict an appropriate response based on the parameters they are given. So, they don’t understand the real-world context of the issues they address. As a result, they can make egregious errors that would be obvious to a human.
For example, although the ability of ChatGPT to turn natural language descriptions into code is extensive and impressive, it has significant limitations in producing usable code on its own, especially for non-trivial coding problems.
When given complex coding tasks, ChatGPT sometimes produces code that, as one software expert put it, “may work but may also almost work.” That is, the code may look as though it does what the developer specified, but have non-obvious flaws that make it unreliable. Needless to say, such code is the stuff of developers’ nightmares.
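To illustrate that “almost works” pattern with an invented example: the method below looks right and passes casual testing, yet silently mishandles century years such as 1900 and 2100.

```java
// Plausible-looking code that "almost works":
public static boolean isLeapYear(int year) {
    return year % 4 == 0;   // flaw: ignores the 100-year and 400-year rules
}

// The correct version a careful human review would arrive at:
public static boolean isLeapYear(int year) {
    return (year % 4 == 0 && year % 100 != 0) || year % 400 == 0;
}
```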
So, we’re nowhere near the point where software engineering can simply be turned over to an AI coding engine. But what AI-enabled platforms can do is produce code that human developers can use as a starting point, saving time and avoiding many of the bugs that humans themselves inevitably introduce into their code when they start from scratch.
What AI-Augmented Software Engineering Means for App Modernization
Because legacy codebases may be huge (often five million lines of code or more), and may contain embedded dependencies and hidden functionalities that are not obvious to the human eye, refactoring a monolithic legacy app to a microservices architecture is a task that’s normally too complex and time-consuming to be done manually. But when AI and human developers collaborate, app modernization can become a much quicker and safer process.
First, AI can provide insight into legacy codebases that human engineers would struggle to acquire on their own. With AI, the process of analyzing legacy apps to determine if and how they should be modernized can be substantially automated, and the ability of the AI platform to almost instantly assess what a legacy code module is doing and how it functions can save engineers hundreds of hours of analysis time.
AI can significantly streamline and automate the creation of microservices, giving engineers and architects the ability to understand the most effective entry points into a microservice and decide appropriate domain boundaries.
AI can also allow developers to prototype various solutions, helping them understand the practical implications and benefits of each approach. Without AI, developers are effectively working in the dark, spending time and taking risks to experiment. AI brings visibility to the entire process—to what’s been done, what’s being done now, and the probable outcomes of specific “what if” scenarios.
In general, the task of modernizing legacy apps is too complex for humans alone to handle, while AI systems lack the strategic and contextual understanding required to formulate optimal business solutions. But when engineers and architects work collaboratively with a modern, sophisticated AI assistant, they can modernize applications far more quickly, with greater confidence and less risk.
Applying AI-Augmented Software Engineering To App Modernization
The vFunction platform is specifically designed to apply AI augmentation to the task of legacy application modernization, transforming a process that, when done manually, is complex, time-consuming, risky, and costly. With its advanced AI capabilities, vFunction can automatically analyze huge monolithic codebases, and substantially automate the process of refactoring them into microservices. vFunction speeds up the modernization process by a factor of 15 or more while reducing overall costs by at least 4X.
If you’d like to see how AI-augmented software engineering can lift your legacy app modernization programs to entirely new levels, schedule a vFunction demo today.
It Takes a First-Class Seat: Demonstrating the Benefits of Continuous Application Modernization
Continuous application modernization allows organizations to address legacy code in iterative steps. It adheres to an agile-like methodology where incremental improvements are delivered faster with less risk than traditional waterfall methods, where multiple updates are delivered at once. However, an effective agile environment requires a mindset change.
Anyone promoting a continuous application modernization strategy must overcome the tendency to resist change. Individuals are hesitant to accept change when they do not understand its impact. That holds for employees as well as executives. In fact, organizational resistance to change is a primary obstacle to implementing new processes.
A lack of executive buy-in and leadership support feeds an organization’s fear of change. Unless management participates in the process, employees hesitate to invest their energy because they do not see a benefit. The proposed change becomes just another “fad” that will be replaced in a month or two. Why invest time and energy in a process that will disappear in a few months?
To ensure project success, IT must first get executive support. Without it, IT departments will encounter employee resistance. So how does IT achieve executive and employee buy-in?
Before you talk about ROI, risk assessments, and budgets, consider the psychology of change. Change management gurus and psychologists cite fear of failure, fear of the unknown, and fear of job loss as reasons for resisting change. At bottom, resistance signals that the perceived reward is not worth the risk.
For example, you have an aisle seat in coach on a full flight from LA to JFK. Just before take-off, the airline offers you a free upgrade to a first-class window seat. Are you going to turn down the upgrade because it requires a change? Probably not.
The same psychology applies to leadership buy-in. If you want executive support, you need to offer them a first-class seat. The question is, how do you do that?
Leadership Support for Continuous Application Modernization Strategies
Gaining leadership support means presenting information that demonstrates to executives that using a continuous application modernization strategy is in the company’s best interests. The rewards of a more agile development environment offset any risks associated with the strategy change. The secret to success is how the data is used to achieve buy-in.
Begin with Data
Companies have more projects than resources. It’s the leadership’s responsibility to decide which projects have priority. They want data to help with decision-making. Executives want to know that the appropriate due diligence has been conducted to determine the project’s scope and cost.
The first step in proposing continuous application modernization is quantifying technical debt. IT must assess the time, costs, and scope to determine debt accurately. The process is time-consuming unless automated tools are used. For example, IT can manually calculate defect ratios by tallying old and new defects, or they can use bug-tracking software to store the information. Other metrics for assessing technical debt include:
Code quality using coding standards
Completion time using hours to correct a reported problem
Rework efforts using bug-tracking software
Technical debt ratio comparing the cost to fix problems versus the cost of building new
While these methods can reduce the assessment time, they still require IT to perform calculations and analysis. Fully automated solutions can eliminate much of the collection, analysis, and calculation required to present a business case for executive buy-in.
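For a sense of the arithmetic behind the last metric in the list above, here is a minimal sketch of the technical debt ratio; the cost figures are invented purely for illustration, not benchmarks.

```java
// Minimal sketch: technical debt ratio = (cost to fix / cost to build new) * 100.
// Both figures below are hypothetical estimates.
public class TechnicalDebtRatio {
    public static void main(String[] args) {
        double remediationCost = 120_000;   // estimated cost to fix known problems
        double rebuildCost     = 900_000;   // estimated cost to build the system new
        double ratio = remediationCost / rebuildCost * 100;
        System.out.printf("Technical debt ratio: %.1f%%%n", ratio);   // ~13.3%
    }
}
```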
AI Modernization Platforms
AI-guided solutions can reduce the assessment time from weeks to hours with little to no IT involvement. For example, AI-automated assessment tools can analyze Java code in less than an hour, providing information on the following:
Technical Debt. Identify sources and measure negative impact if not addressed.
Complexity. Quantify the complexity of each application and prioritize the apps to modernize first.
Metrics. Assess metrics for return-on-investment analysis.
Using a fully automated platform enables IT personnel to spend less time collecting data and more time focusing on the benefits that will deliver buy-in.
Define the Process
Many outside of IT may not be familiar with the concept of continuous application modernization. They may not understand how the process differs from more traditional approaches to software development. Part of the buy-in process means explaining how the change impacts an organization.
Suppose a company has a payment processing solution that needs updating to support a different payment type. The project has a three-month deadline. As the project progresses, decisions are made to leave the architecture the same because recasting the payment type as a microservice would delay the release date.
After the software is released, the company can continue with its existing architecture, adding to its technical debt to be addressed decades in the future. Or, the company can add moving the payment type to a microservice to its list of modernization tasks and assign it a priority to decrease the technical debt as quickly as possible.
The ramification for the company is the allocation of resources to work on modernization tasks as part of its routine workflows. That may mean fewer resources are available to work on other projects. To many, this process may seem like a negative, taking valuable resources away to fix software that is working just fine.
That’s when focusing on the benefits comes into play. It’s these benefits that will convince leadership that modernization is in the company’s best interest.
Focus on Benefits
Achieving executive support for continuous application modernization means framing change in terms of benefits. It’s about using the data to inform the discussion on why continuous application modernization is the right strategy for the organization. Here are some tips on using data to demonstrate why modernization is the best choice.
Meet Key Business Objectives
Instead of talking about money and timelines first, talk about impediments to meeting business goals. Take the payment type example.
Suppose accepting crypto payments is a business objective. Start with what is needed to support that payment type, highlighting the technologies the existing architecture cannot support. Using the data from an automated analysis, explain the time and cost of modernizing the entire application. Be sure to note that the timeline assumes the use of automated tools.
Part of the discussion should include the impact on daily IT operations. When modernization occurs in a waterfall-like approach, all available resources will be consumed in the effort, leaving minimal IT staff to address everyday issues. Historical data should provide information on what percentage of staff time is used in system maintenance activities and support.
Contrast the waterfall scenario with a continuous application modernization strategy that uses iterative development. With a prioritized analysis, discuss a timeline where microservice development is integrated into standard IT operations. Then, compare the timelines. Which approach is more likely to meet business objectives with less disruption and at a lower cost?
Improve Agility
Comparing timelines opens the discussion to another critical business objective — agility. Leadership is well aware of agility’s value to a company’s long-term viability. What they don’t know is how to achieve it. That’s where a continuous application modernization strategy comes in.
Consider compliance updates using the payment example. Payment networks have annual or semi-annual updates that must be completed as scheduled to remain in compliance and continue payment processing. What happens when an update is required in the middle of modernization?
Using data from an automated analysis, IT can determine which microservices are impacted by the updates. They can look at the project schedule and determine the impact on deliverables. A low-priority microservice may need to be higher on the list. If the modernization assessment presents the data per microservice, adjusting the timeline should be straightforward with little impact on the overall schedule.
A waterfall-based strategy could lead to difficult decisions.
If the modernization project is months away from completion, updates must be made to the existing code to remain in compliance. Updates will also need to be added to the modernization code to ensure backward compatibility. When the new code is delivered, the updates may require retesting or recertification since it is a new code base.
If the project is close to completion, the updates can be added to the new code. The existing code would remain untouched. If the new code is not ready as anticipated, the company is out of compliance and risks penalties and fines. The added updates may extend test times.
The compliance example illustrates continuous modernization’s agility. While the changes may impact overall delivery schedules, the strategy delivers the agility needed to ensure operations with minimal risk.
Achieve Buy-In for Continuous Application Modernization
Focusing on benefits still requires a detailed analysis of what modernization will take. It needs the same data as is needed if using a more traditional time and materials approach. The difference is in the seat location.
In coach, the executives struggle to see where the plane is going and the turbulence ahead. They cannot reduce the noise to decide what is in the company’s best interests. In first class, leadership encounters less noise and has a clearer perspective of the plane’s path. They are less resistant to change because they see the long-term advantages of a continuous application modernization strategy.
vFunction’s Assessment and Modernization Hub provides organizations with data-driven analysis of what is needed for modernization. The data can then be used to get leadership buy-in when focusing on what facilitates change. Contact us today to get started gaining leadership support for your modernization projects.
As we settle into 2023, today is the perfect time for your organization to start considering modernization in the year ahead. It’s essential that organizations regularly upgrade their operations to remain competitive. Keeping systems current and up-to-date also promotes stability, growth, and higher levels of success.
That said, the variety of options available for app modernization can make it difficult to determine the best course of action. At the same time, the fact that there are so many options suggests there are many innovative minds helping make the process faster and more straightforward. So, if you’re among the many CIOs, CTOs, senior developers, application architects, and system integrators pondering ways to modernize your apps this year, this article is for you.
The Case for Modernizing Your Business Applications
Most digital transformations are propelled by executive-level recognition of the pivotal role technology platforms play in accelerating growth. Of course, most digital transformation projects focus on migrating infrastructure and apps to the cloud. Nevertheless, indispensable legacy systems such as enterprise resource planning (ERP) systems, mainframe systems, Lotus Notes, and Microsoft’s SharePoint have generally been excluded from such projects.
While it may seem counterintuitive for organizations to hold onto legacy systems that are problematic and costly to maintain, these organizations value the familiarity and dependability of their legacy systems. But those seemingly beneficial qualities are greatly negated by a lack of features and flexibility. In addition, most legacy systems require specialized skill sets to manage them—skill sets that are steadily diminishing throughout most industries.
Many organizations that delay app modernization presumably find it more challenging to assess its value compared to measuring the value of other business priorities. However, modernizing internal legacy apps greatly enhances customer experience and business operations.
Incorporating microservices, for example, facilitates continuous integration and continuous delivery. This makes trying out new ideas and rolling back changes quick and painless. The microservices architecture achieves this by building on cloud support, though it is not necessarily exclusive to cloud computing.
According to a recent HubSpot article, cloud integration platforms break down software silos, improving collaboration, increasing visibility, and enhancing cost control. HubSpot reported that business departments using individual applications and services cause silos to develop quickly.
Left unchecked, these departmental disconnects only multiply. Recent statistics cited by HubSpot show that large organizations typically use around 175 cloud-based applications, while smaller organizations use 73. To bridge this digital divide and allow IT teams to monitor and manage heterogeneous apps from a centralized system, organizations are seeking to modernize their legacy systems.
What To Expect in 2023 for Technology and Cloud Modernization
At one time, business relevancy wasn’t an issue that many conventional tech leaders invested much thought or energy in. Before digital modernization became a hot-button topic, organizations addressed business and tech alignment in two distinct ways:
Horizontally. Organizations arranged IT teams according to their various skill sets, such as coding, custom development, and business analytics, among others. Additionally, they would loosely support all necessary operational areas.
Vertically. Organizations dedicated IT teams to aid various departments, such as finance, marketing, sales, security, inventory, etc.
Over the past couple of years, many enterprise IT teams have begun developing something closer to a diagonal model, combining the most suitable elements of both strategies. In a diagonal model, the cloud provides a horizontal technology foundation accessible to any employee, and organizations can build on top of that foundation as they see fit. This restructuring compels business units and IT teams to cooperate in developing better ways of operating across the board.
Pivoting From Cloud Migration Toward Cloud Modernization
Over the past decade, cloud computing has gone from being merely a trend to being a megatrend. Trends, however, are rarely necessities; they are typically short-lived because they lean toward style rather than functionality. Many experts have long argued that cloud computing goes far beyond a trend and is, in fact, a necessity.
When it comes to cloud computing, as long as the internet exists, it will only become more of a necessity for public and private users alike. In other words, unless another Carrington Event occurs in our lifetime, cloud computing is poised to become as integral to organizational operations as electricity.
Deloitte reported at the end of 2020 that nearly 70% of CIOs viewed cloud migration as one of their top IT spending drivers for the year. Deloitte supported this by noting that increased data center sales in Q2 2020 lifted the revenue growth of the three biggest semiconductor companies by 51%.
Nonetheless, while companies find it easy to invest in data centers, sifting through the multitude of options for implementing data migration complicates the process. But it doesn’t end there.
Many organizations finally bit the bullet in 2022 and made a 2023 New Year’s resolution to migrate to the cloud. As most business leaders know, as soon as you have the funds and resources to upgrade an aspect of your operation, the next innovative solution hits the market.
Trying to keep your company up-to-date starts to feel more like a game of Whac-A-Mole. As we’ve written in the past, it will only become increasingly essential to think beyond cloud migration throughout the 2020s.
By the time the 2030s roll around, many businesses will find it nearly impossible to provide consumers with any sort of value without cloud modernization. Cloud migration is only a step toward the ultimate goal, which is modernization and future-proofing. Replicating legacy technology in cloud environments may seem sufficient at the moment, but that sufficiency will certainly wane the closer we get to 2030.
2023 will be a year that more leaders realize that even minor modernization projects require the adoption of cloud-native technologies. Of course, microservices are requisite for adopting cloud-native technologies. Once the growing pains of all this have subsided, the scalability and flexibility provided by cloud modernization will help enterprise IT teams innovate, develop, and deliver both faster and more efficiently. Not only will your customer experience have notable improvements, but your organization will be able to collect data across the totality of your business.
Shifting to Data-Driven Decision-Making With Cloud Computing
It’s safe to say that any forward-thinking enterprise wants to evolve into a dynamic data-driven company. As they say, the wars of the future will revolve around data, not oil. We already see this unfolding with private data firms like Palantir analyzing the Ukraine conflict. The U.S.-based company develops software that coordinates satellite imagery to assist the US military in monitoring conflicts and various threats globally.
“Palantir’s software is crucial during bad times for governments to handle the massive amounts of data they need to make a change,” the firm stated in a self-published press release. What pertinence does this have to the topic at hand, you might be wondering?
If data holds such value to governments and possesses enough power to influence geopolitics, imagine the benefits data offers companies that use it strategically in business. This is why we believe that the algorithmic enterprise will no longer be an ideation but a tangible reality.
Pivoting from gut-feeling analytics to purely data-driven analytics requires a robust, dynamic cloud foundation. The cloud offers the high level of computing power required for enterprise-level analytics. That computing power comes from awesome innovations such as unsupervised artificial intelligence, data mining, machine learning, and predictive models. In addition to that, cloud platforms assist enterprise leaders in democratizing data access.
Most importantly, various types of data will become available to more employees and departments beyond IT. The end result: your teams tap into troves of data-driven insight and make better decisions most, if not all, of the time.
Provide Value Points by Modernizing Your Business Application
Gartner published a report predicting that spending on cloud projects will be over 50% of IT budgets by 2025 compared to 41% in 2022. This increase in cloud investment has resulted in many more organizations launching app modernization projects. If you’re considering, planning, or beginning to modernize your business application, you will want to ensure that you get the most value from your modernization efforts.
Michael Warrilow, research vice president at Gartner, stated:
“The shift to the cloud has only accelerated over the past two years due to COVID-19, as organizations responded to a new business and social dynamic […] technology and service providers that fail to adapt to the pace of cloud shift face increasing risk of becoming obsolete or, at best, being relegated to low-growth markets.”
In today’s cut-throat market, being relegated to a low-growth market is often a warning sign of becoming obsolete. To prevent this, it’s important to extract as much value as possible from modernizing your business application.
In other words, it might not be enough to go through the process without establishing the value points most critical to your company. Modernization shouldn’t be approached as merely doing the minimum to get by. The value points enterprises most commonly concentrate on include improved scalability, increased release frequency, better business agility, higher engineering velocity, and stronger security.
The vFunction Architectural Observability Platform helps many organizations properly formulate their modernization strategies. It’s a purpose-built modernization assessment platform for decision-makers, empowering them to evaluate technical debt, risk, and complexity. Request a demo today to see for yourself how we can assist with your transformation.
As technology continues to evolve, businesses are increasingly turning to microservices and containerization to improve their IT operations. Container management is the process of overseeing and maintaining the containers that hold the microservices in a distributed system. It enables the efficient deployment, scaling, and management of microservices.
By understanding container management, you can better navigate the complexities of microservices and gain insight into the best practices and tools to deploy and maintain them efficiently. Defining what it is, why it’s used, and why a management strategy is needed will lead to more effective operations and better overall performance of your IT systems. Here’s what you need to know about container technology.
The Role of Virtual Machines in Container Technology
Container technology grew out of virtual machine partitioning in the 1960s. Virtual machines (VMs) let multiple users access a computer’s full resources through a single application. When VMs were first introduced, developers used them to install multiple operating systems and applications on a single physical server, each in its own isolated environment.
Instead of maintaining multiple servers running different development environments, IT could maximize the use of physical servers through VMs. However, VMs were resource-heavy and slow to spin up. In contrast, containers share the host’s resources, require far less overhead, and initialize faster, but those capabilities often make deployment more complex. Early deployments frequently packed multiple microservices into a single container, and that complexity hindered container adoption until automated management tools were developed. Now, system architects have access to solutions that make containerization software more reliable. These tools make up what is known as container management.
Security was not a major concern when VMs first hit the market, so their deployment allowed unrestricted access to any environment running on the same device. In the 1980s, developers began looking at ways to restrict access to improve system security. Limiting file access to specific directories was the start of container-style processes.
The Growth of Container Technology
Docker released its container platform in 2013. It was a command-line, open-source solution that made container deployment easy to understand. It was well-received, and within a year, the software was downloaded over 100 million times.
In 2017, Google’s Kubernetes became the de facto standard for container orchestration, supplying scheduling and orchestration capabilities. When Microsoft enabled Windows Server to run Linux containers, Windows-based development teams could take advantage of the technology as well. That change opened the door to more container-based deployment.
As more organizations moved to the cloud, container management became a concern as deployment and monitoring remained complex. Today, many cloud providers offer managed container services to deliver streamlined solutions with independent scalability. Ongoing development looks to incorporate AI technologies for improved metrics and data analysis, leading to error prediction, incident resolution, and automated alerts.
According to Statista, the container market is expected to reach $5 billion in 2023, with a year-on-year growth rate of 33%. Whether considering or expanding the use of container technologies, organizations should develop a strategy for incorporating container management into their modernization plans.
To understand the role this technology plays in a company’s efforts to reduce its technical debt, IT staff should evaluate the landscape, beginning with what container management entails.
What is Container Management?
Containers are lightweight, virtualized environments that package an application together with its libraries and dependencies into a single deployable unit. Often used in conjunction with microservices, containers allow multiple applications to share the same operating system kernel while running as self-contained code. Containerization makes for lighter-weight implementations with greater interoperability than VMs.
However, containers have to be managed. They have to be deployed, scaled, upgraded, and restored. As more companies look to cloud-native containers, managing hundreds or thousands of them becomes overwhelming. That’s why the market for container management continues to grow.
Container management is commonly defined as a set of “tools that automate provisioning, starting/stopping and maintaining of runtime images for containers and dependent resources via centralized governance and security policies.”
Container management solutions may be platforms that operate as software-as-a-service or software solutions installed locally. Container management enables developers and administrators to realize the benefits of containerized microservices while minimizing potential errors.
Why Use Container Management?
Container management tools simplify the deployment and monitoring of containers. With these tools, IT can stop, start, restart, and update clusters of containers. Automated tools can orchestrate, log, and monitor containers. They can perform load balancing and testing.
IT departments can set policies and governance of a containerized ecosystem. As more organizations move to a container infrastructure, they will need automated tools to manage large environments that are too much for staff to maintain.
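To show the idea these tools automate, without standing in for any real orchestrator’s API, here is a toy sketch of the desired-state loop at the heart of container orchestration; startContainer and stopContainer are hypothetical placeholders.

```java
// Toy sketch of a desired-state reconciliation loop (not a real orchestrator API):
// compare how many replicas of a service are running with how many should be,
// then start or stop containers until the two match.
public class ReconcileLoop {

    public static int reconcile(String service, int running, int desired,
                                Runnable startContainer, Runnable stopContainer) {
        while (running < desired) { startContainer.run(); running++; }
        while (running > desired) { stopContainer.run(); running--; }
        System.out.printf("%s: %d replicas running (desired %d)%n", service, running, desired);
        return running;
    }

    public static void main(String[] args) {
        reconcile("payments", 2, 4,
                () -> System.out.println("starting container..."),
                () -> System.out.println("stopping container..."));
    }
}
```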
Other benefits of container management tools are:
Portable: Containerized applications can be moved from the cloud to on-premises locations. They can move to third-party locations without integration concerns, as the technology does not depend on the underlying infrastructure.
Scalable: Microservices can scale independently with minimal impact on the entire application.
Resilient: Different containers can hold different applications, so a disruption in one container does not result in a cascading failure.
Secure: Unlike VMs, containers isolate applications. Gaining access to an application in one container does not automatically give bad actors access to others.
Containers are lightweight with minimal overhead, making for fast deployment and operation. However, turning monolithic legacy code into containerized microservices can introduce vulnerabilities unless carefully orchestrated. Container management is not without its challenges.
What Are Container Management Challenges?
Management complexity and the lack of a skilled workforce are the primary obstacles to containers and their management. For example, containers only exist when needed. When a container is no longer required, it spins down, taking all of its data with it. Maintaining persistent data across containers poses significant problems if not well managed.
Complex Container Management
What happens when a container needs data that is no longer available? How can developers ensure data is persisted? How can they identify these errors with hundreds of clusters of containers to manage? Without container management, it’s almost impossible for developers and architects to deliver an error-free implementation.
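One common answer is to keep state out of the container entirely and write it to an external store that outlives any single container. The sketch below assumes a hypothetical orders table and a database URL supplied via an environment variable; it illustrates the pattern rather than prescribing a design.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Sketch: persist state to an external database rather than the container's
// local filesystem, so the data survives when the container spins down.
public class OrderStateStore {

    // e.g. "jdbc:postgresql://db:5432/orders" - injected, never baked into the image
    private final String jdbcUrl = System.getenv("ORDERS_DB_URL");

    public void saveStatus(long orderId, String status) throws Exception {
        try (Connection conn = DriverManager.getConnection(jdbcUrl);
             PreparedStatement stmt = conn.prepareStatement(
                     "UPDATE orders SET status = ? WHERE id = ?")) {
            stmt.setString(1, status);
            stmt.setLong(2, orderId);
            stmt.executeUpdate();
        }
    }
}
```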
Application isolation protects against VM-type vulnerabilities; however, orchestrators can expose containers to attack. APIs, monitoring tools, and inter-container traffic increase the attack surface for hackers. Using security best practices, such as only using trusted image sources, reduces authentication and authorization weaknesses. Closing connections minimizes the number of entry points for bad actors.
Lack of Skilled Staff
The most significant challenge is the lack of experienced staff. Organizations need a thorough understanding of the scale and scope of a containerization effort. They need a roadmap that outlines how existing code connects and communicates to ensure relationships are retained. Since containers can run in multiple environments, architects must define the business objectives behind the move to modernization to ensure the right infrastructure is in place.
A recent Gartner survey found that Kubernetes and infrastructure-as-code were the most sought-after skills. Some organizations are developing expertise centers. These groups are tasked with helping in-house staff as needed while using the opportunity to train others to reduce the skills gap. Others are looking for outside sources to help with knowledge transfer.
With the drive to modernization using microservices and containers, companies need a container management strategy to address the critical challenges that a complex container architecture can present. Without a reasoned strategy, organizations face a technical debt greater than that of legacy code.
Why is a Container Management Strategy Needed?
Container management should be part of any modernization strategy. However, its complexity requires a roadmap that includes the following:
Isolation of users and applications
Authentication and authorization protocols
Resource management
Logging and monitoring
Long-term container usage
Multi-cloud platforms
Microservice development
The plan should include methods to address edge computing, cloud implementations, modernization, and training.
Implementing Edge Computing
With edge computing, the volume of data makes management difficult. Moving all the data to a central location poses performance concerns since much of the data is being captured at the edge. More organizations are looking at building edge infrastructures to prepare the data before sending it to the cloud for processing.
Containerizing applications at the edge to handle data ingestion and cleaning should be part of any strategy. Placing these workloads close to the data acquisition point can improve AI implementations and data-intensive processing while reducing cloud storage costs.
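As a rough illustration of what ingesting and cleaning at the edge can mean in code, the sketch below filters and rounds raw sensor readings before they are forwarded to the cloud; the thresholds and names are assumptions, not recommendations.

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch of an edge pre-processing step that could run in a container near the
// data source: drop bad readings and round the rest so a much smaller, cleaner
// payload is sent on to the cloud.
public class EdgeCleaner {

    public static List<Double> clean(List<Double> rawReadings) {
        return rawReadings.stream()
                .filter(r -> r != null && !r.isNaN())       // discard missing or corrupt samples
                .filter(r -> r >= -50 && r <= 150)           // discard out-of-range values
                .map(r -> Math.round(r * 10.0) / 10.0)       // keep one decimal place
                .collect(Collectors.toList());
    }
}
```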
Refactoring Cloud Implementations
Within the last three years, many organizations have moved all or some of their infrastructure to the cloud. Unfortunately, for many, the move was rushed. Monolithic applications that were migrated (lifted and shifted) to the cloud or simply containerized as a full monolith were brittle and could not scale or be updated without refactoring or rewriting. Making architectural changes would require careful planning by developers and business units to minimize disruption without compromising modernization.
Companies cannot assume that simply migrating an application to the cloud removes any technical debt. By refactoring cloud implementations, IT departments can ensure that their cloud deployments actually reduce technical debt.
Modernizing Monolithic Applications
Breaking monolithic software into microservices and containers requires a roadmap that helps developers navigate a way forward. A migration strategy also highlights skills gaps that can be addressed before modernization is underway. As container management tools improve and more organizations move to a cloud-native environment, a clear strategy is needed to ensure that the migration does not increase technical debt.
Few modernization efforts can be completed all at once. The so-called big bang approach increases delivery times and introduces many of the same issues that monolithic structures present. Iterative approaches make thousands of lines of code more manageable and reduce the risk of operational disruption.
The vFunction Modernization Platform and Container Management
vFunction’s refactoring platform for Java and .NET applications can help organizations realize the benefits of container management. Using its platform, organizations can decompose monolithic apps into microservices for container deployment in cloud-native environments. The platform can serve as a tool for developing well-constructed microservices that minimize the risks to container implementations.
As companies plan for 2023, they will look to containers and their management as a modernization path. Without a complementary modernization strategy, the resulting infrastructure can prove problematic. For help modernizing Java and .NET applications that take advantage of the cloud environment, contact us to see how we can work with you to develop a strong implementation strategy.