How to Close the Growing Application Modernization Gap
Best practices to become an elite modernization performer
Featuring James Governor, RedMonk
The desire to modernize legacy applications has never been stronger, but the industry has lacked the tools and requisite best practices. Elite teams building new applications have dramatically increased their release frequency and engineering velocity, and these same practices can now be applied to legacy Java application modernization and refactoring.
In this webinar, James Governor from RedMonk and the vFunction team will explore how to increase your automation, observability, and continuous modernization skills to massively accelerate Java app modernization velocity and success.
Other Recommended Resources
The Best Java Monolith Migration Tools
As organizations scale to meet a growing tsunami of data and sudden, unexpected business challenges, they are struggling to manage and maintain the applications that run their business. When unprepared, exponential data growth can tax a company’s legacy monolith Java systems and their IT departments.
Intesa Sanpaolo Case Study (pdf)
In this case study we will describe the challenges, how Intesa Sanpaolo decided to convert one of its main business-critical applications from a monolithic application to microservices, and how a platform called vFunction helped to turn this challenge into a success.
The vFunction Return on Investment (ROI) calculator determines the key benefits – based on actual customer metrics – achieved by using the vFunction cloud native modernization platform, including:
- Time to Market Acceleration
- Total Cost Savings
- Total Time Savings
0:00:25.5 Speaker 1: Okay, good morning everybody. My name is Kenya, and I’ll be your host as we discuss how to close the growing application modernization gap. Presented to you by vFunction and RedMonk, and hosted by Virtual Intelligence Briefing. Today’s webinar will be about 45 minutes, and we’ll save the last 10 to 15 minutes for Q&A. So please use the Q&A button at the bottom to ask questions, and we’ll do our best to answer all questions. For those that we don’t get to, we will definitely follow up with you after. And now without further ado, I would like to introduce our first speaker, co-founder of RedMonk, James Governor.
0:01:02.7 Speaker 2: Hello everyone. Great to be with you today. And it looks like some people are still coming in, so that’s great, fits the theme which is “velocity.” I think we look at the landscape today of technology and everyone is asking, “How do we go faster? How can we ship more digital services to our customers, employees, citizens, more quickly?” Speed is of the essence. We talk about developer velocity, ’cause we’re very, very concerned with the velocity of the business. It doesn’t matter what sector that you’re in, we’re seeing more and more competition from new entrants. Not every geography as yet has a lot of FinTech or insurance tech innovation, but there are some that do. Certainly if you look at London, where I am based, new insurance companies are popping up all the time, there is a pretty rich vein of new very, very consumer oriented banking services. And of course, when we look around at the competitive landscape, we are seeing those hyperscalers also become a threat going forward. Apple begins to look more like a financial services company or a healthcare company. If we think about Apple Health, the possibilities there for competition grow evermore fierce. I think we had a great example throughout the pandemic actually, retail businesses needed to respond extremely quickly to a significant change. Hospitality businesses the same.
0:02:31.8 S2: One of the reasons I think that retail organizations, like Target, say, or Walmart, were able to respond with some speed was ’cause they already had a fire underneath them that had been lit by Amazon Web Services. Competition is fierce in all sectors, we all wanna move faster. Can we go to the next slide please? How do we move faster? How do we deliver more software with quality? There’s a great research project by Dr. Nicole Forsgren at DORA, the DevOps Research and Assessment group, which was acquired by Google. It’s a great piece of work, and over the years, she basically surveyed organizations to try and understand elite performance, to try and understand how organizations shipped code and how they shipped it well. So, next slide.
0:03:18.0 S2: For me, one of the key insights, and these have now become metrics understood throughout the industry, is how quickly we can deploy new services, how quickly we can recover, and so on. But the key thing to me that we’ve found about elite performance is that not only are they shipping dramatically more code and delivering new services far more quickly than their competitors, they’re also doing so with fewer failures. So this idea of moving fast and breaking things isn’t really actually a thing. The truth is that the elite are moving fast and not breaking things. So, how are they doing that? What are they doing? Let’s carry on.
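The elite-performer measures James refers to are the four DORA metrics: deployment frequency, lead time for changes, change failure rate, and time to restore service. As a rough illustration, two of them can be computed from simple deployment records; the `Deployment` shape below is a hypothetical sketch, not part of any DORA tooling.

```java
import java.time.Duration;
import java.util.List;

// Sketch of two DORA metrics over a list of deployment records.
// The Deployment record shape is a hypothetical illustration, not a real API.
public class DoraMetrics {
    // One deployment: lead time from commit to deploy, whether the change
    // failed in production, and how long restoring service took if it did.
    record Deployment(Duration leadTime, boolean failed, Duration restore) {}

    // Fraction of deployments that caused a failure in production.
    static double changeFailureRate(List<Deployment> deploys) {
        long failures = deploys.stream().filter(Deployment::failed).count();
        return (double) failures / deploys.size();
    }

    // Average time from commit to running in production.
    static Duration meanLeadTime(List<Deployment> deploys) {
        long total = deploys.stream().mapToLong(d -> d.leadTime().toMinutes()).sum();
        return Duration.ofMinutes(total / deploys.size());
    }

    public static void main(String[] args) {
        List<Deployment> week = List.of(
                new Deployment(Duration.ofHours(2), false, Duration.ZERO),
                new Deployment(Duration.ofHours(1), true, Duration.ofMinutes(30)),
                new Deployment(Duration.ofHours(3), false, Duration.ZERO),
                new Deployment(Duration.ofHours(2), false, Duration.ZERO));
        System.out.println("change failure rate: " + changeFailureRate(week));
        System.out.println("mean lead time (min): " + meanLeadTime(week).toMinutes());
    }
}
```

The point of the DORA findings is that elite performers score well on all four metrics at once: high frequency and short lead time without a rising failure rate.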
0:04:00.7 S2: Next slide, please. Yeah. Historically, one of the ways we build things is as monoliths. And monoliths were great. We could build the functionality, it was well understood. And these things were not going to change very much over time; the expectation was not this world of velocity. I’d perhaps be delivering a new banking application every six months, not every six hours. That was the expectation. So we were happy with monoliths. We could build them, we could add to them, but they really weren’t flexible. To move quickly, the kinds of organizations that we’ve talked of and described as elite performers have begun to break things down into microservices. This is Uber’s microservices graph. Really pretty complex, hundreds if not thousands of services all connected, communicating with one another. That creates complexity, but what it’s enabled is a more effective way of working where you’ve got more product ownership and smaller teams. Amazon Web Services loves to talk about the two-pizza team: they’re focusing on product, they’re not held back by what everyone else is doing, they’re working individually on particular components and services, which you then compose into a whole. And that’s one of the things that we’ve seen from these elite performers and the kinds of organizations that are doing DevOps.
0:05:22.5 S2: So we move forward. We’re here talking about… Hosted by vFunction. So, let’s talk a bit about Java and the Java landscape. Here’s one thing that I think is very interesting. When we think of a lot of those applications that were monoliths in the enterprise, a significant number of them were built in Java. Java has been around for a long, long time. Pretty much my tenure in this industry has been defined by the era of Java, and there’s a reason you’re here today, you’re thinking about application modernization, you’ve almost certainly got some Java in your estate that you’re looking at. One of the things that we found interesting at RedMonk, my research company, we do regular programming language rankings. The way we do that, is we look at behaviors. We don’t survey people. We believe that a lot of good developers, they’re not gonna fill in surveys, but there are breadcrumbs that we can pull together to understand behaviors. What we do is we look at the conversations that are happening in Stack Overflow.
0:07:17.3 S2: Hey look, Java’s back. There was enough activity around Java modernization and some of the things we’re seeing around Java 17, we’re seeing cloud companies involved and so on, that Java had actually tied again with Python. So never write Java off. I said my time in this industry has been sort of book-ended by Java. I’ve been doing talks that explain that Java isn’t dead for almost that entire time. Java’s in pretty good health. The web companies are using it and so are the traditional enterprises. Next slide, please.
0:07:49.0 S2: So let’s talk about those web companies, the investments that they’re making. Microsoft, just a couple of weeks ago a job ad goes up. They are hiring for a… They’re hiring for an Azure functions engineer, their new serverless technology, with a Java background. They’re investing in Java there. They acquired a company called jClarity, which has been a big part of that. They’ve got some of the best Java advocates on the planet. There’s a great guy called Bruno that anyone in Java will know here, Bruno Borges.
0:08:25.7 S2: So if you look at what they’re doing, they’re investing heavily in Java because they want to win those enterprise workloads. So Azure is investing from an engineering perspective, and they are investing from a community perspective. What about Amazon? A lot of Amazon is built in Java, frankly. Obviously, they’re making further investments now in the likes of Rust, but Java’s been core to a lot of those services that we know and use on a daily basis. And again, they wanna win these workloads. There are no kinds of workloads that Amazon doesn’t wanna win. They’re also investing accordingly. They want high performance Java systems running in the Amazon cloud. Google Cloud, no exception. So we’ve got the cloud companies that just want to win all of the workloads. They wanna take these enterprise workloads and bring them into the cloud. In order to do that, they’re gonna have to show some Java chops. And on that note, it’s interesting to me… We think about the movement around cloud data and everything.
0:09:23.1 S2: Let’s not write off Oracle. They’ve even created a Kubernetes distribution on Oracle Cloud specifically designed for WebLogic workloads. So there’s a lot going on in terms of the hyperscalers, the cloud companies, wanting to absorb and bring those Java workloads running in your shop onto the cloud. Next slide, please. Okay, so I think for me, the big question is, well, what does the customer need? What is the job to be done? And it’s not that every workload needs to immediately go to the cloud. That kind of lift and shift doesn’t always make sense. But there are no IT organizations today that are not asking questions like, “Can we have more portability of our workloads into the cloud? Can we be cloud native? Can we move more quickly? How are we gonna do that? How can we get a better handle on our application estate?” You know, this is true of any organization today that has legacy applications. They’re asking, “Is this stuff we should deprecate? Is this stuff we should re-write? Are there particular services that are gonna be of most benefit to us internally or externally? How do we understand our code?”
0:10:33.7 S2: And those are the kind of things… Those are the jobs to be done, and that’s what’s needed from the industry, in terms of getting those workloads to be more useful and fit that velocity imperative that we talked about. Next slide, please. So in that world, we see stuff like, “Okay, let’s have this hybrid world.” For me, it’s all multi-cloud. I think hybrid’s a bit overdone. Or we have hyperscalers, but then we’ve definitely got this phenomenon that some workloads are gonna remain on-prem, but cloud native and modernized. And there we’ve got an emerging landscape. SUSE came in and bought [0:11:09.5] ____, because they understand that there are gonna be on-prem workloads. VMware with Tanzu have an interesting play, because of course there’s the Spring connection. VMware still owns the Pivotal assets, and Spring, and Spring Boot, has been a big part of the Java modernization wave. We’ve got Google. Even Google, you think of them: “Oh, run it like Google. Google knows best. We’ve got Borg. We don’t need to… ” No, they did Kubernetes, and now they’re coming on-prem with their platform, Anthos.
0:11:39.6 S2: And of course, some of those workloads are gonna be Java-based. So they’re happy to invest accordingly. And then, of course, OpenShift. It’s no surprise that Red Hat needs to get a handle on this too. Red Hat’s a really interesting one because they got ahead of the game with OpenShift. It was originally a PaaS platform. They pivoted to Kubernetes. They got a lot of initial market traction. They’ve sold the hell out of it. They’ve done well enough in that regard that IBM came in and made that acquisition. But you’ve got this… Okay, we’ve got Kubernetes. We’ve got our JBoss workloads. But how do we actually get the workloads into OpenShift clusters? What if they’re empty clusters? Organizations have invested in Kubernetes without being ready to move their application estate over. So how are they gonna do that? How are they gonna get a handle on that? That’s one of the big questions that a lot of organizations are struggling with. On the vendor side, they’re asking for help from GSIs and so on. So that’s the question for today. How do we modernize? Next slide, please.
0:12:41.3 S2: If we think about modernization, we’re thinking about a set of practices that lead to production excellence. Production excellence is a term framed by a company called Honeycomb and their founder, Charity Majors. They’re an observability company. Right off the bat: observability. What’s that? Okay. Observability is like saying, “Hang on a minute, the way we’ve done monitoring and logging and so on has really not been about serving the developer, and it has not been built for this world of fast application changes, and even things like testing in production.” Are you terrified to make a system change on a Friday? Most organizations are. Charity argues for production excellence, where you’re taking DevOps practices, agile, things like what I call progressive delivery, where you’re doing canarying, A/B testing, and so on, and you’re really investing in “if you break it, you fix it.” Observability is about this: we need to understand the performance of the application all of the time, and always be thinking about how it’s performing. Then we’re better able to troubleshoot when something goes wrong, rather than just having those metrics and being like, “Uh oh, wait, what’s happening with memory on this particular day?” and coming in after the fact.
0:13:55.1 S2: So observability, progressive delivery, all of this is built of course on CI/CD, as everything good in modern software delivery is: agile, strong testing, DevSecOps, shifting testing left. All of that comes together in terms of how you can move faster, move fast and don’t break things. Next slide, please.
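To make the progressive-delivery idea concrete: canarying means sending a small, adjustable slice of traffic to the new version and watching it before ramping up. Real tools do this in the service mesh or load balancer; the sketch below is only a hypothetical illustration of the routing decision itself.

```java
import java.util.Random;

// Hypothetical sketch of a canary routing decision: send a fixed fraction
// of requests to the new version and the rest to the stable one. Production
// progressive-delivery tooling makes this choice at the infrastructure layer.
public class CanaryRouter {
    private final double canaryWeight; // fraction of traffic for the canary, 0.0..1.0
    private final Random rng;

    public CanaryRouter(double canaryWeight, Random rng) {
        this.canaryWeight = canaryWeight;
        this.rng = rng;
    }

    // Returns which version should serve the next request.
    public String route() {
        return rng.nextDouble() < canaryWeight ? "v2-canary" : "v1-stable";
    }

    public static void main(String[] args) {
        CanaryRouter router = new CanaryRouter(0.05, new Random()); // 5% canary
        for (int i = 0; i < 10; i++) {
            System.out.println(router.route());
        }
    }
}
```

If the canary’s error rate or latency regresses, the weight goes back to zero and most users never saw the failure, which is how elite teams move fast without breaking things.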
0:14:16.4 S2: So, what do we have to do? We need to address this developer experience gap. We’ve got a question of these managed operating models, not leaving all the complexity to the user. And there’s so much Java out there, and there’s still a lot being built. Arguably, we could say that when web companies grow up, they turn into Java shops, and on the JVM there’s still a lot of Java being built. So there’s a more or less permanent need for modernization. But to deal with the problems of monolithic applications and the move to microservices, the industry needs better tools. They need better automation. And it’s all about platforms, ’cause platforms are where we’re already heading, and platforms can enable some of that culture change. And talking about culture change, I just wanna finish with some stories about Netflix.
0:15:01.0 S2: Netflix didn’t begin like the organization they are now. When we look at [0:15:04.7] ____ and the elite performers, even in that respect, Netflix is way, way beyond most other companies in this industry in terms of how they perform. Now, there’s a guy called Adrian Cockcroft. He used to work at Netflix, and he would tell a great story. He would go out, and he was a brilliant evangelist, and explain how Netflix had moved to microservices, was so much more productive than it had ever been, and could do production excellence. They broke things down into small teams, and there were a couple of key stories there.
0:15:40.5 S2: One, what would happen was, CIOs would say to him, “Oh, Adrian. Netflix is so amazing. But you’ve got all of this talent. Where can we get this talent so that we too can be like Netflix? Where did you hire these people?” And Adrian would always reply and look at them and he’d say, “Oh, I hired them from you.” The simple truth is just they had a culture of excellence and a culture of hiring well, and they would try to invest in people, and therefore, they were able to do this.
0:16:08.3 S2: The other really, really important thing that I think we need to think about is: look, Netflix didn’t begin as it is now. Nor did Amazon. These applications began as massive Java hairballs, the same kinds of things we see in every enterprise across the industry. But the fact is, they did the work so that they could break things down into small pieces that were loosely joined, and they could perform accordingly and move more quickly. So there’s a question for the rest of us, who maybe don’t have the ability to pay the kinds of salaries that Netflix does, and that’s why we need tools and platforms, building out this new ecosystem where we can take our applications and be more productive. And on that note, I’d love to hand over to vFunction to say a little bit more about how they are trying to address this gap that we see around application modernization.
0:17:00.3 Speaker 3: Yes. Well, thanks James. This is Bob Quillin. I’m the Chief Ecosystem Officer here at vFunction. I’m going to take what James said and apply it more deeply to what we do, really trying to look at the problem that’s being solved and addressed in application modernization, some of the big issues today. Our focus is on the fact that $3 out of every $4 spent in IT today goes to legacy systems, and on how you shift this paradigm and begin to move this investment forward into innovation.
0:17:33.5 S3: These monoliths are eating up budget, eating up your spend, slowing down velocity. It’s issues around legacy licensing and supporting outdated platforms. Supporting these platforms creates security holes, and there’s ever-growing technical debt that weighs you down. We look at this and hear from customers that this is a boat anchor for DevOps especially, ’cause DevOps is about faster release frequencies, faster lead times, faster recovery, trying to decrease your failure rates.
0:18:06.1 S3: But all those things don’t come without actually creating a much more agile, microservice-based architecture. And application modernization around these legacy systems is slow. It’s, I’d say, not very modern today. And why is that? Well, really, the challenge has been that this is the state of the art. This is really where the last 10 to 15 years have been spent trying to do application modernization: locking yourself in a room, looking at sticky notes on a board, trying to figure out the architecture in a very manual, painstaking fashion.
0:18:41.7 S3: But what came out of this set of best practices is things like event storming, domain-driven design, and the strangler pattern. We kinda know how to move an application manually. But the question is, how do you do it with some automation? There’s gotta be a better way. So instead of doing this on a manual basis, many people have opted for re-hosting or re-platforming their applications and their legacy apps, in effect lifting and shifting them to the cloud.
0:19:14.0 S3: It kicks the can down the road and provides some limited value to get started, but it has proven to be very disappointing, because in order to get the full value of the cloud, the stuff that James just talked about, the elasticity, the velocity, the innovation, all those release cycles, reducing your technical debt, you need to refactor those legacy apps. And you can’t do it manually; you need automation, and you need to do this not just once, but on a continuous basis. We’ll talk about that in terms of where the market is going and what the challenges are. The goal then is to take these legacy apps and move them into microservices, into a loosely coupled architecture. And these loosely coupled architectures are a top predictor of production excellence. You hear this in the DevOps reports, the DORA reports that James talked about.
0:20:04.9 S3: The strongest predictor of successful continuous delivery and production excellence is really reducing those fine-grained dependencies. It allows teams to operate independently of each other; they can scale, they can fail faster, they can test faster, deploy faster, and everyone can work at their own pace and recover fast, and all of that should begin to reduce the technical debt that’s weighing you down as that boat anchor. The problem is, in order to get all those benefits… legacy apps are like a sad dog on the outside looking in. They want to be part of this game, they wanna be part of modernization, they wanna be part of all these DevOps benefits in cloud native architectures. So how do they do that? I say, let’s take a cue from DevOps and look at how elite performers are doing this on the DevOps side. The goal of DevOps was acceleration and scalability, done on a repeatable basis, and done faster. And really that’s what we’re trying to do for modernization: bring all that automation into application modernization, add observability, do this on a continuous basis, build pipelines, build a factory, and couple loosely.
0:21:14.6 S3: The idea then is that by doing that, you can get more frequent code deployments, all the faster lead times, lower change failure rates, all those benefits there on the right that we’re looking for as we move forward into the cloud native world. So the goal, and we’ll be talking about this further in demos with Amir in a second, is really to lower the debt; that’s step one. This allows you to then automate, observe, and accelerate on a much simpler basis. It allows you to innovate and meet all of the business requirements, reduce that technical debt, and reduce the spend, the 3 out of every 4 dollars we just talked about early on: how do you reduce that and get more innovation in your environment? And we’re going to focus on the 80% of the apps that haven’t been moved to the cloud yet and haven’t moved to cloud native architectures. So once you begin to lower the debt, then you begin to manage the debt for continuous modernization, so it’s a two-step process. We see this moving forward on a much more continuous basis to increase modularity, find and eliminate dead code, and provide a continuous way to improve and modernize your applications. I’m gonna hand over to Amir now, who can walk through the product itself, tie it to the architecture, do a demo, and walk you through how it works.
0:22:29.8 Speaker 4: Thank you, Bob. I’m Amir, I’m the CTO at vFunction. I’ll start by answering a question that we got from Sheesh: as we move to continuous modernization, will the budgeting problem, the 75% spent maintaining legacy, get resolved? I think this is just in line with what Bob was saying, so that’s why I’ll answer it now. Your first task is really to lower your technical debt. Your technical debt got to a certain point where you have to modernize the application. But even if you lower the debt, the technical debt doesn’t reach zero, so you’ll still have to spend some on paying off technical debt constantly. And the idea is that if you constantly manage, that’s step two, constantly manage your technical debt to keep it low, to keep the architecture sound, to get rid of architecture smells and code smells, then you’ll be able to keep it at a lower percentage.
0:23:37.1 S4: But now, in the demo, I want to show you the vFunction platform and how it solves the first part, lowering the technical debt; that’s where I want to focus. The vFunction platform, which we see as a shift left for architects, really allows architects to understand how the application is actually built, how it’s actually working in production, and not necessarily only how it was planned or designed or architected. So this is the platform. It’s a three-phase approach. The vFunction platform starts by learning your application, your monolithic application. We do some dynamic analysis: we track what’s actually running in the application, and we track access to interesting objects in the JVM. These objects might either hint to us the domain of that specific flow through the code, or hint that there’s a certain constraint on extracting that flow and modularizing it along with some other bits.
0:24:52.4 S4: So if there’s, say, a lock or a synchronized object, then maybe the two different flows that access that same synchronized object should be taken together and extracted, because otherwise you’ll need to refactor more code to make it modular. So this is the first part, the learning. All of this information is sent to a vFunction server, which is installed on premise or in your cloud, so it’s installed on your side, and that’s where the analysis happens. We have some machine learning and AI that takes all of this information, and it’s a lot of information we’re talking about, graphs with millions of nodes from within your application. Based on that, we do a lot of analysis to offer you a starting solution for how to extract services from your application. This platform, which I’m gonna show you in a minute, is an interactive platform. It allows you as an architect to completely design the microservices or modules that you want to extract, and it will show you a blueprint of exactly how to do that.
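The kind of constraint Amir describes can be illustrated with a hypothetical example: two call flows that synchronize on the same object. Splitting `placeOrder` and `restock` into separate services would turn an in-process lock into a distributed coordination problem, so an analysis that observes both flows touching the same monitor has a reason to keep them together.

```java
// Hypothetical illustration of a modularization constraint: two call flows
// guard the same shared state with one lock. Extracting them into separate
// services would require replacing the lock with distributed coordination,
// which is why flows sharing a synchronized object may be grouped together.
public class SharedLockExample {
    private final Object inventoryLock = new Object();
    private int stock = 10;

    // Flow 1: order placement decrements stock under the lock.
    public boolean placeOrder(int quantity) {
        synchronized (inventoryLock) {
            if (stock < quantity) {
                return false; // not enough inventory
            }
            stock -= quantity;
            return true;
        }
    }

    // Flow 2: restocking increments stock under the same lock.
    public void restock(int quantity) {
        synchronized (inventoryLock) {
            stock += quantity;
        }
    }

    public int stock() {
        synchronized (inventoryLock) {
            return stock;
        }
    }

    public static void main(String[] args) {
        SharedLockExample inventory = new SharedLockExample();
        inventory.placeOrder(3);
        inventory.restock(5);
        System.out.println("stock: " + inventory.stock());
    }
}
```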
0:26:04.1 S4: So the platform both raises the right questions and provides you with the right tools and the right metrics to make conscious decisions, even if you’re not experienced in application modernization, and we know there’s a skills gap there. Not a lot of people have experience with actually modernizing a Java monolith. After you do this analysis and you’re happy with the new, proposed architecture, the system also has some automation to help with the service extraction. The system can take the code, copy it aside, do some automatic refactoring, and create a Maven POM or Gradle build script, so you can actually build that as a microservice and then deploy it in a container on some modern infrastructure. So this is it. I’m gonna show you the demo now: what it looks like, how we take some sample applications, modernize them, and reduce the technical debt level to a minimum. So let me jump to this screen here.
0:27:22.1 S4: What we’re seeing in front of us is the analysis screen of the vFunction platform. What you see is an application that we call OMS. OMS is an Order Management System, a demo application that mimics a retail order management system. So there’s inventory, there’s pricing, there’s shipping, and what you see here is the automatic decomposition of this application using the vFunction platform. Every one of these circles, these spheres, is actually a service. The size of the sphere indicates the size of the service in terms of number of classes, and the color indicates the exclusivity of the service. Exclusivity is a very important term. What the system tries to optimize is the percentage of classes needed for the service that are exclusive to that service. Because if all of the classes within the service are either from a library or exclusive to that service, it means we’ve covered the whole domain, we can take that as a module, and we’re not leaving any remnants around in the application. So green is high exclusivity, blue is medium exclusivity. And in real-life situations we see a lot of pink, which is a lower kind of exclusivity that the architect needs to work through, understand, and kind of massage, refining the boundaries of the services to really define the best domains and the best modules or services to extract.
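As a back-of-the-envelope reading of that metric (my own sketch of the narration, not vFunction’s actual formula): exclusivity is the fraction of a service’s classes that no other service also uses.

```java
import java.util.Map;
import java.util.Set;

// Sketch of the exclusivity metric as described in the demo: the fraction
// of a service's classes that no other service also uses. This is my own
// reading of the narration, not vFunction's actual formula.
public class Exclusivity {
    static double exclusivity(String service, Map<String, Set<String>> classesByService) {
        Set<String> mine = classesByService.get(service);
        long exclusive = mine.stream()
                .filter(cls -> classesByService.entrySet().stream()
                        .noneMatch(other -> !other.getKey().equals(service)
                                && other.getValue().contains(cls)))
                .count();
        return (double) exclusive / mine.size();
    }

    public static void main(String[] args) {
        // Mirrors the demo numbers: five classes, one (ShippingService)
        // shared with another service, so 4 of 5 are exclusive, i.e. 80%.
        Map<String, Set<String>> services = Map.of(
                "fulfillment", Set.of("FulfillmentController", "OrderLine",
                        "Invoice", "Packing", "ShippingService"),
                "shippingPrice", Set.of("ShippingPriceController", "ShippingService"));
        System.out.println(exclusivity("fulfillment", services)); // 0.8
    }
}
```

Moving a shared class out of one service’s boundary, as the demo does next, is exactly what pushes this number toward 100%.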
0:29:00.5 S4: So let’s focus a little bit on this service, the Modified Fulfillment Service. When we click on it, it says it has two entry points, two methods that actually invoke this Modified Fulfillment Service. We can also focus in, just by hiding those classes that are common, and really zero in on the business logic here. So there are two of these, and we can focus on that. We can look at the classes that we found during the dynamic analysis. We see that we have 80% exclusivity here, meaning 80% of these five classes are exclusive to the service. In other words, we have four exclusive classes, which are these. But there is also one non-exclusive class, which is the shipping service, and one class that was found in the service but that the system recommends be put in a common library. So we can look at this non-exclusive class, the Shipping Service, and see where it’s called from. We see that it’s called from both the Modified Fulfillment Service and the Shipping Price Controller, and we can decide that it makes sense for the shipping service class to be exclusive to the Shipping Price Controller.
0:30:14.8 S4: So we can change the boundary of the services. And from the Modified Fulfillment Service, where we call the Shipping Service, the system takes you exactly to the right points, and we can create an entry point to the Shipping Service. Clicking on “make” will add a service-to-service call here and eliminate the Shipping Service from the Modified Fulfillment Service, and now the system will re-calculate all the metrics. We see that we’re down to four classes, but now with 100% exclusivity. Let’s do the same, but with resources. As I mentioned before, the system tracks access to interesting objects. Some of these interesting objects represent access to database tables or queries against database tables. So let’s look at this order line table. This order line database table is also accessed from both the Modified Fulfillment Controller and the Order Controller. Let’s say that we want this order line table to be solely called from the Order Controller, so that when we extract this service, we can also extract its database, or these tables, with it. So we can click here, see where it’s called from, and go up the call stack, up to the class where it actually invoked that query…
0:31:37.5 S4: Sorry, I didn’t mean to do that. From the Order Controller, I’ll go back to the Modified Fulfillment Controller, to the order line, and click here on the Modified Fulfillment Controller. So when the Modified Fulfillment Service calls the sales order object that creates the query, here I’ll add a service-to-service call to the order service. I click on “make an entry point,” and the analysis runs in the background and does all the calculations for me. And now I have these calls that I didn’t have before from the Modified Fulfillment Controller: there’s a call to the shipping service and a call to the order controller. And when I look here at the resources, I see that the resource exclusivity went up. I can look at the resources report, look at all the database tables that are here, and see that now this line chart table and the order line table are exclusive to the order controller. I can focus on all the exclusive classes, these are all the classes that are now exclusive to a specific service, or look at the non-exclusive ones and see that the payment info is still accessed from two different services.
0:32:48.8 S4: So you can play around with this, you can drag and drop these services, there’s a lot you can do here. It’s a very powerful platform. But I’ll jump to a different application and show you some more capabilities. This is the MedRec application, a medical records application that comes as a sample with Oracle WebLogic. And I want you to focus here on a specific service, the searching patient controller, which was found automatically by the system. Here, we see that exclusivity is already 100% for the dynamic classes. But there are only five dynamic classes; when we add the compile-time dependencies, we have 43. So let’s look at these. We find many exclusive classes, which is fine, but also some non-exclusive classes, and this is the searching patient controller.
0:33:51.1 S4: But if we look closely, we see that there’s also a physician service class here. Just by the name, if you’re searching for a patient, you don’t need the physician service. The same goes for the record service and the record facade. So there’s a large number of classes that are there, but don’t need to be. We can click on the details, and it takes us to our dependency tree. We see that the record service is there because of the base physician page controller. Clicking on that, we see that the base physician page controller brings in the patient service, the physician service, and the record service. I’ll jump to here and show you the actual code. The base physician page controller really has all three services injected into it, regardless of whether or not they’re needed, because this is an abstract class and all of the controllers extend it.
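The abstract-controller pattern Amir describes might look roughly like the sketch below. The class and service names are illustrative, modeled on the MedRec naming, not the actual source:

```java
// Minimal service stubs so the sketch is self-contained.
interface PatientService   { String findPatient(String name); }
interface PhysicianService { }
interface RecordService    { }

// The anti-pattern: the abstract base controller has all three services
// injected, so every subclass depends on all of them, even a controller
// that only ever searches for patients.
abstract class BasePhysicianPageController {
    protected final PatientService patientService;
    protected final PhysicianService physicianService;
    protected final RecordService recordService;

    protected BasePhysicianPageController(PatientService patients,
                                          PhysicianService physicians,
                                          RecordService records) {
        this.patientService = patients;
        this.physicianService = physicians;
        this.recordService = records;
    }
}

// Extracting this controller into its own service drags in PhysicianService
// and RecordService as compile-time dependencies it never actually uses.
class SearchPatientController extends BasePhysicianPageController {
    SearchPatientController(PatientService patients,
                            PhysicianService physicians,
                            RecordService records) {
        super(patients, physicians, records);
    }

    String search(String name) {
        return patientService.findPatient(name);
    }
}
```

This is why the compile-time view shows 43 classes where the dynamic view showed only five: the inheritance chain pulls in dependencies the service never exercises at runtime.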
0:34:50.8 S4: So we really need the patient service, and we don’t need the physician and record services. What we can do in the system is mark this record service as dead code, because we don’t need it. We’re telling the system, okay, that’s dead code, it’s not required. The analysis runs in the background, same as before. Okay, now it’s done. Higher static exclusivity, and if we look at the non-exclusive classes, we won’t see anything related to record here. We can do the same with physician: just mark it as dead code, the analysis runs again, and we won’t see it here. We also don’t see the base physician page controller, because it was now refactored automatically. You see this asterisk here: the class was refactored not to include the record and physician services. And we see these as grey dots here.
0:35:54.1 S4: I want to show you the assessment report. Before you get to actually extracting an application, when you’re happy with the analysis, the assessment report can show you how difficult or how easy it will be to decompose the application. The assessment report for this application, which is a simple demo application, rates it as a low-complexity application. It shows you how much work you should expect to spend on lowering the technical debt: whether the class exclusivity is high enough, whether the resource exclusivity is high enough, whether the topology is simple enough, etcetera.
0:36:38.4 S4: Then, when I’m happy with this architecture and this service, I want to extract it. I can click on save here, and the service creation tab lights up. Then I can download the service specification file and the Swagger file detailing the API of this service. I can open up the YAML file for this service. I also have a JSON file that I’ll open here, because I already copied it to my terminal. This is the service specification file, and it shows me what I need in order to make this into a service: what are the endpoints I need, the resource files, the configuration files, the classes, etcetera. What I’m going to do is use a tool that comes with the platform called code copy, and start creating these services. I’m running code copy and giving it the common library specification file, because we had common code there, like the logger, so we create a common library out of it. So we created a service under the services common library directory. I’m going to copy the old POM, or rather a modified POM instead of the one that was generated, because it has the parent POM.
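The Swagger file Amir mentions describes the extracted service’s API. As a rough idea of shape only, a minimal OpenAPI 3.0 document for a patient-search endpoint might look like this; the path, parameter, and service name are invented for illustration, not vFunction’s actual generated output:

```yaml
openapi: 3.0.0
info:
  title: searching-patient-controller   # illustrative service name
  version: "1.0"
paths:
  /patients/search:
    get:
      summary: Search for patients by name
      parameters:
        - name: lastName
          in: query
          schema:
            type: string
      responses:
        "200":
          description: Matching patient records
```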
0:38:32.0 S4: Go into that directory and simply run mvn clean install on this common library, and you’ll see that this common library is built just like that. Let’s go back to the previous directory and do the same with the searching patient controller, because I want to show you the extracted service. Again, I’ll copy the modified POM just to save some time, and go into the service directory. I’m going to copy a constants file from the original project. The reason I’m copying the constants file is that we don’t really analyze it, because the Java compiler optimizes these classes away while compiling the Java code.
0:39:21.3 S4: And I want to show you now the class that we automatically refactored. Here, we see the same base physician page controller, but now only the patient service is injected into it, not the services we didn’t want. We still have those imports here; let me quickly remove them so I can compile the code, because these classes don’t exist anymore. And again, I’ll run mvn clean install and compile this extracted service. So really, extracting services and reducing the debt by modularizing your code and reducing the complexity shouldn’t be hard with the right tools.
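After the refactoring Amir demonstrates, the base controller keeps only the dependency the extracted service actually uses. A sketch, with illustrative names rather than the actual MedRec code:

```java
interface PatientService { String findPatient(String name); }

// After refactoring: PhysicianService and RecordService are gone, so the
// extracted service no longer drags them in as compile-time dependencies.
abstract class BasePhysicianPageController {
    protected final PatientService patientService;

    protected BasePhysicianPageController(PatientService patients) {
        this.patientService = patients;
    }
}

class SearchPatientController extends BasePhysicianPageController {
    SearchPatientController(PatientService patients) {
        super(patients);
    }

    String search(String name) {
        return patientService.findPatient(name);
    }
}
```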
0:40:10.9 S4: I’ll go back to my presentation and show you some more slides. I showed you a very simple example, right? But these applications are not always simple. We see here a more complex example from a real-life application. All of these yellow dots are classes that we haven’t seen in the dynamic analysis, only in compile-time dependencies. Some of them are there because they are optimized out by the JIT compiler or by the Java compiler; that’s why we can’t see them in dynamic analysis. But in every application that we’ve seen, there’s a lot of dead code. And when I’m talking about dead code, it’s not necessarily unreachable code. It can also be code… Let’s say you have a class that, originally, was used only for a single use case or a single domain.
0:41:06.5 S4: But now it’s used for two separate domains, and different methods of that class serve those two different domains. What you really want to do is split that class into two, because if you just take that class as-is into each one of the services, you’ll bring in so many unneeded dependencies with it. The platform really tells you how to do that, and it can also automatically find that dead code for you. I want to show you one more amusing bit: when you take just a few more levels of this graph for a more complex application, this is what you get.
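The class-splitting idea Amir describes can be sketched as follows: one class whose methods serve two domains is split so each extracted service takes only its half. The names and methods here are invented purely for illustration:

```java
// Before: one helper serves two domains, so extracting either the order
// service or the shipping service drags in the other domain's dependencies.
class OrderAndShippingUtil {
    double orderTotal(double unitPrice, int qty) { return unitPrice * qty; } // order domain
    String shippingLabel(String city) { return "SHIP-TO:" + city; }          // shipping domain
}

// After: split by domain; each service depends only on the half it needs.
class OrderUtil {
    double orderTotal(double unitPrice, int qty) { return unitPrice * qty; }
}

class ShippingUtil {
    String shippingLabel(String city) { return "SHIP-TO:" + city; }
}
```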
0:41:41.8 S4: Okay, so… [chuckle] Going back, I want to review what we just saw. We were talking about this modernization technology, and what I showed is really step one: how to lower your technical debt. I showed you the assessment, how you can assess the complexity of the system and how complex it will be to decompose it into services. When you’re talking not about the first step of lowering the debt, but about maintaining it, what you want instead of an assessment is really to monitor that complexity: not just to show you how complex the application is, but to send you an alert when the complexity goes over a certain threshold, to make sure that your debt stays low.
0:42:28.3 S4: We were showing the decomposition of the system into services. When we’re talking about maintaining a low technical debt, we’re talking about maintaining the modularity and reducing the impact risk. The impact risk is really the risk that when you change a certain class in a certain place, you affect a completely different domain. This is something you want to minimize. And of course, the dead code I showed you: while you’re lowering the debt, dead code happens all the time. It happens through changes in your application and through changes in the way your clients use your application, so finding the dead code, also in a live application, is extremely important in order to lower your technical debt. I’ll turn this back to you, Bob.
0:43:20.9 S3: Thanks, Amir. I appreciate that. Great demo, very brave, and a nice walkthrough of the technology and of how the product works in all its different phases. And that plays nicely into where we’re going. As an industry, along this cloud journey, there are a lot of different applications for modernization no matter where you are. If you’re on the far left, with a broad application estate, and you’re trying to prioritize which apps to modernize first, which second, etcetera, and whether to refactor versus rewrite or maybe retire, we provide a lot of data-driven analysis that helps you approach it analytically.
0:44:09.0 S3: We’re seeing a lot of activity here where people are just getting started. And people who are in the middle of a modernization, sometimes with very large, so-called megalithic applications, 10 million lines of code or more, are looking to selectively refactor: not do the whole app, but selectively refactor one or two primary services, pull those out, and maybe move those to the cloud, or to a user-facing web service, or to something that’s more common across the set of different applications going forward.
0:44:41.9 S3: In the third area, we see people who’ve already lifted and shifted, and they’re doing the next phase: “I’ve lifted and shifted. I’m up on the cloud now. I’m running. I want to get the next level of value from the cloud,” and that’s a major value point we see. For those large application estates, the other use case we’re seeing is people who are building a whole factory. Typically, they have 10, 50, 100 different applications, and that creates an opportunity, or a need, to monitor and manage that refactoring process. People actually manage and monitor it using our dashboard, which allows you to analyze those applications, tag them for different kinds of modernization plans, and track the process along the way.
0:45:29.4 S3: Okay. So what’s next? You heard from Amir the benefits of not only modernizing and refactoring an application, but doing this on a continuous basis. That provides benefits across a broad range of areas, including testing, detecting dead code, and preventing the technical debt from overwhelming you. And then continuing to refactor to prevent the next monolith from developing. So that’s really what’s next in terms of vision. So, a summary, and then we’ll get to some Q&A. Really, we want to talk about elite performers, production excellence, and how you move application modernization into this next wave. The current approach has been slow and manual, with an overreliance on lift and shift.
0:46:23.2 S3: And really, what we wanna bring are tools around automation, deeper observability, to decompose, build pipelines, build factories, do this on a continuous basis, and shift left for architects, and continue with this process of shifting left into the development phase, and be continuous as we do that. So, resources and next steps: There’s a variety of resources that are out there on our website. There’s a Medrec blog, and a set of use cases around that. We actually have a tutorial on this available on our portal, also. Requires access to. Feel free to reach out to me or Amir, and we look forward to working with you on that. So, it’s probably worthwhile now taking a look at some of the Q&A.
0:47:08.5 S3: And I think we have answered some of those questions already. Thanks, Amir. I’m seeing another question out there around security and where vFunction runs. Is it an on-premise solution? Does it run on a cloud tenancy? Amir, do you want to talk a little about how we operate and how the security works from a product perspective?
0:47:37.2 S4: Yeah, sure. First of all, the system analyzes how your code runs; it doesn’t analyze any data that runs through the application. We don’t see that data, and we don’t have access to it. In that regard, you can say that we capture telemetry from your application, same as an APM agent would. Our server is installed on-premise or in your cloud, by you, and maintained by you. So everything is within your firewalls and VPN, and we don’t have access to it. That’s it, really.
0:48:16.9 S4: For the application itself, when you install it, the server is installed somewhere; it can be installed in a Kubernetes environment or on a VM. The agent that collects the data is similar to an APM agent: there’s something that runs on the JVM, another process that helps collect the data and send it to the server, and another process that does the static analysis piece. There was another interesting question, Bob, about whether we can show an ROI comparing the before and after of the modernization. I’ll take that one and maybe talk a little more broadly.
0:48:51.3 S3: Sure.
0:48:54.0 S4: So, some of you have large portfolios of applications. Let’s say you have 100 applications. Where do you start? Which application has the lowest technical fitness, or which has the highest technical debt? It’s not a simple question to answer, and vFunction does help you to quickly assess that. What we can show you, and I haven’t shown this in the demo, is your real TCO. There’s usually a TCO multiplier: when you want to spend $1 on innovation, how many dollars are you actually spending on that innovation? Bob showed us that number; it’s usually around $3 to $4 instead of a single dollar to add functionality in a monolithic application.
0:49:43.8 S4: So that allows you to prioritize. After you prioritize that, you can go through the analysis process. And after the analysis process, the system will also show you what’s the new TCO going to be after you apply that analysis and that modernization process onto the same application. And that can allow you to prioritize further, and also build a business case. I, as a developer, usually hate the code I write, just about after I finish writing it.
0:50:17.3 S4: So I always want to refactor the code, but I usually don’t have the right tools to make the business case. Like, “Do I need to? Will it help? Will it help other parts of the application? How will it reduce my TCO. And what’s the ROI on this refactoring?” So the system does allow you to get all these things, build the right business case, reduce the technical debt, and also maintain it.
0:50:43.0 S3: Gotcha. Great, thanks. Thanks, Amir. Hey, James, I’ve got a question for you on the future of Java and what’s driving its longevity. Given the faster release cycles of the language itself and the commitment behind it, what’s your sense of the future of Java, and where is it going?
0:51:02.3 S2: Yeah, I mean, I think what’s interesting to me is just a couple of things. One, frankly, Java is an engine that enables, and Java’s innovation in itself has significantly improved. That was not necessarily the outcome we expected back in the olden days when Oracle acquired Sun; it wasn’t really clear what would happen there. But certainly there’s been this movement to a faster cadence, with more transparency of roadmap, and not grandiose plans like, “Oh, you’ll get something in five years,” but “You’ll get something in six months.” Java itself has responded somewhat to the requirement for more velocity. That’s all to the good. It remains kind of multivalent in terms of governance, and I think that’s good as well. Obviously, the Java Community Process is less relevant in many respects than it used to be, but it’s still there, and there are things that people can understand about standards.
0:52:25.9 S2: That continues to be important. For me, there are some interesting questions about the IDE. Java and Java IDEs were such a thing. If we think about the Eclipse wave, and then latterly the emergence of IntelliJ as everyone’s favorite IDE, one of the questions for me, with modern developers, is, “Will we begin to see Visual Studio Code really emerge as a… ”
0:53:42.4 S2: So I think Java in this more managed world of higher level services promises to enable us more productivity… But the question is, yeah, How do we get there? And hopefully, that’s what we’ve been talking about, that’s what the demo showed. I think that… I don’t know, I mean, as I say, it seems like every year I hear someone say, Java’s dead. And then I just see it keep on, keep on keeping on. So yeah, language innovation and tooling innovation, and those two things, that continues to be important.
0:54:17.0 S2: I guess the last thing I would just say is, of course, framework innovation, because framework drives language adoption. We’ve continued to see great work out of the Spring Community, another axis of governance really on the benevolent dictator side, and Spring Boot’s a great piece of work. So bottom line, if it’s good enough for Netflix, it’s probably good enough for people that wanna move fast, and Netflix has it all shot. I guess the future of Java is us looking more like Netflix.
0:54:51.4 S3: Gotcha, great. Thanks, James. Amir, I see another question on Spring: what are some of the platforms we support from a Java EE perspective? And also Spring to Spring Boot, what do we see there? Can you talk a little about our support matrix and what we see out there with customers?
0:55:15.1 S4: Yeah. First of all, we see some old stuff too. We see stuff starting from Java 6, and we see old application servers like WebLogic 11g and WebSphere 8.5 and 8.0, so really old stuff, some of it already end-of-life. People are struggling with this weight of technical debt. We see a lot of Java EE companies, companies that use Java EE and want to continue to use it. We see a lot of Spring, and people who want to take a Spring application to Spring Boot. We see Spring running on Java EE platforms with very little use of EJBs, where they want to move from an application server to Spring Boot. We basically see everything, and we support all of it. We support Spring from version 1.0.
0:56:16.5 S4: Although I think we saw 1.0 only once. So 2.0 is actually quite common, 2.5, and so we know how to understand that and help you also to minimize those large Spring configurations that will be minimized only for the specific services and property files. So all of that, and this is actually quite a big mess, and when you try to modernize it yourself and you get to those XMLs, those SSRS XMLs, it becomes an issue. We’d love to help you with that too.
0:56:57.2 S3: Great, yeah. Thanks, Amir. I think that’s a wrap on the questions. I think we’re right down to the end of our time here. I’ll hand it back to Kenya to do final wrap-up, but thank you very much and we appreciate all the time you spent, and look forward to working with you on your application modernizations. Thank you.
0:57:17.4 S1: Awesome, thank you guys. Thank you everybody. And I hope you guys all have a great rest of your day. Thank you.