Mexico City, 1968 Olympics. At a time when all the world’s high jumpers were doing a straddle technique, one Dick Fosbury executed a seemingly impossible move, winning gold by keeping his center of gravity lower and rolling over the bar backwards.
His unique approach flipped the script on the sport, and the high jump has been dominated by the “Fosbury Flop” ever since.
We’re experiencing a similar situation in many of today’s enterprises. Well-established companies are staring up at an ever-higher bar of pre-existing application and integration code, and they are inevitably seeking ways to modernize this inflexible code and regain the agility to respond to market challenges.
It’s time for companies to flip the script on modernization without creating prohibitive effort or accruing additional technical debt.
A conventional wisdom has developed over the last decade–supported by analysts and cloud experts–that moving applications to cloud computing infrastructure is not only desirable, but absolutely necessary to any modernization effort.
The fastest-growing unicorns, like Netflix and Uber, were born in the cloud, so why not emulate their path to success?
For an established company, the jump to cloud usually means replatforming applications–lifting VMs and databases from on-premises or conventionally hosted servers onto a pay-as-you-go model with a cloud IaaS provider.
Yes, the old lift-and-shift is alive and well in the cloud. After all, if you can’t build net-new application functionality from scratch, at least putting everything on cloud infrastructure might enable forward progress in flexibility, right?
Lifting virtual servers and J2EE binaries onto cloud services and migrating data does provide some nice capabilities, such as reserving excess capacity or maintaining parallel instances of whole environments for failover and recovery. But the heavyweight nature of monolithic applications in the cloud also means you can rack up surprisingly high compute and storage costs while still making new functionality hard to add.
The applications were never optimized for the cloud in the first place, much less ready for truly cloud-native agility and efficient scaling.
There’s one big reason why lift-and-shift is still going on, even though it simply kicks the can down the road for a later reckoning: refactoring sounds really hard. Picking apart legacy application code to rewrite functions is the legendary stuff of all-nighter incident responses to show-stopper bugs. It’s not the kind of work most teams would willingly sign up for.
Maybe it is time to change this mindset and rethink the possibilities for refactoring toward a modern architecture well-suited for a cloud native future.
Fortunately, today’s enterprise can take advantage of a secret weapon. The state of the art in data science, automation and AI has advanced over the last decade–and now it’s ready to apply to code itself.
Even legacy code is still a set of instructions for an application on an intended target system. Therefore, by embedding domain-driven expertise and the core concepts of cloud architectural practices within refactoring itself, companies can ‘shift architecture left’ and rethink the monolith as a modern platform target for an existing application.
Enterprise architects–or possibly CTOs and CDOs at digital product-oriented companies–are the primary coaches for setting a game plan for shift-left refactoring.
These leaders need a broad understanding of the entire application estate of the business as it exists today. What functions of the monolith are essential to ongoing operations? Which ones are creating roadblocks to improvement and change?
Architects set the vision for moving toward an API-driven, microservices-oriented stack, and then identify and coach the development leadership necessary to compartmentalize those old J2EE or Spring monoliths, one container at a time.
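To make the idea concrete, here is a minimal plain-Java sketch of what “compartmentalizing” one domain looks like. The `OrderApi`/`OrderService` names and the in-memory store are hypothetical illustrations, not the vFunction tooling or any real monolith: the point is that an extracted domain hides behind a narrow interface, owns its own data, and is then ready to be containerized and fronted by a REST layer.

```java
import java.util.HashMap;
import java.util.Map;

// Before extraction, order logic, pricing, and persistence were typically
// tangled together in the monolith. After extraction, the order domain
// stands alone behind a narrow, API-shaped boundary.
interface OrderApi {
    String createOrder(String customerId, double amount);
    double getOrderTotal(String orderId);
}

class OrderService implements OrderApi {
    // An in-memory map stands in for the database the monolith shared;
    // in a real extraction, the service would own its own datastore.
    private final Map<String, Double> orders = new HashMap<>();
    private int nextId = 1;

    @Override
    public String createOrder(String customerId, double amount) {
        String orderId = customerId + "-" + nextId++;
        orders.put(orderId, amount);
        return orderId;
    }

    @Override
    public double getOrderTotal(String orderId) {
        return orders.getOrDefault(orderId, 0.0);
    }
}

public class ExtractionSketch {
    public static void main(String[] args) {
        // Callers depend only on the interface, so the implementation can
        // move into its own container without changing client code.
        OrderApi api = new OrderService();
        String id = api.createOrder("acme", 42.0);
        System.out.println(id + " -> " + api.getOrderTotal(id));
    }
}
```

In practice the interface would be exposed as a REST endpoint (for example, via a Spring `@RestController`) and packaged as a container image, one extracted domain at a time.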
Once architectural priorities are set, development teams must also buy in as primary stakeholders for success. After all, most engineers would rather not rebuild old functionality from scratch or manage a monolith in the cloud if they knew refactoring offered a faster, less resource-intensive path to agility.
Development leaders are responsible for applying refactoring automation, much like an additional team member with specialized skills. Developers and integration experts can then focus on assuring the newly converted microservices are executing on functional and performance goals in the modernized target environment.
Intesa Sanpaolo, an international European banking group, accomplished exactly this kind of transformation over the course of three years, applying the vFunction Platform to automate both dynamic and static analysis of its Java applications, interpreting application flows across Spring/EJB and WebLogic business logic to detect and extract key business domains. The team generated containerized microservices on JBoss and OpenShift for the bank’s own enterprise PaaS platform, ready for Kubernetes-driven cloud deployment.
The bank reported a 3X increase in feature delivery rate, with a labor reduction from months of manual refactoring effort to 1-2 days in some instances. But more importantly, the modernized codebase also improved the application’s performance and reliability on the production platform, while making it ready for faster future customer enhancements.
Dick Fosbury turned his back on the conventional technique, and every winner since has done the high jump backwards.
Enterprise architects and development leaders might take a pointer from the Fosbury Flop–challenging the accepted norm of modernizing the target infrastructure before the application code.
Rather than lift-and-shift, try taking advantage of new intelligent tools to refactor code first–not just to save time and money, but to gain a sustainable advantage in a competitive arena where agility and resiliency in front of customers win every time.