We’re excited to have Nenad Crncec, founder of Architech, writing this week’s blog post. With extensive experience in addressing architectural challenges, Nenad shares valuable insights and highlights how vFunction plays a pivotal role in overcoming common stumbling blocks. Take it away, Nenad!
In my journey through various modernization projects, one theme that consistently emerges is the challenge of managing complexity—whether in microservices and distributed systems or monolithic applications. Complexity can be a significant barrier to innovation, agility, and scalability, impacting an organization’s ability to respond to changing market demands.
Complexity can also come in many forms: complex interoperability, complex technology implementation (and maintenance), complex processes, and so on.
“Complex” describes something we can’t clearly understand: it is unpredictable and hard to manage because of its multifaceted nature and the interactions between its components.
Imagine trying to assemble flat-pack furniture without instructions, in the dark, while wearing mittens. That’s complexity for you.
What is complexity in software architecture?
Complexity, in the context of software architecture and system design, refers to the degree of intricacy and interdependence within a system’s components and processes. It encompasses how difficult it is to understand, modify, and maintain the system. Complexity arises from various factors, including the number of elements in the system, the nature of their interactions, the technologies used, and the clarity of the system’s structure and documentation.
Complexity also arises from two additional factors, even more impactful – people and time – but that is for another article.
The double-edged sword of microservices
I recently assisted a company in transitioning from a monolithic architecture to microservices. The promise of microservices—greater flexibility, scalability, and independent deployability—was enticing. Breaking down the application into smaller, autonomous services allowed different teams to work concurrently, accelerating development.
Allegedly.
While this shift offered many benefits, it also led to challenges such as:
- Operational overhead: Managing numerous services required advanced orchestration and monitoring tools. The team had to invest in infrastructure and develop new skill sets to handle containerization, service discovery, and distributed tracing. DevOps and SRE roles were spawned as part of the agile transformation, and a once complex environment…remained complex.
- Complex inter-service communication: Ensuring reliable communication between services added layers of complexity. Network latency, message serialization, and fault tolerance became daily concerns. Add to that the communication (or lack thereof) between the teams building services that need to work together, and you have a recipe for disaster if not managed and governed properly.
- Data consistency issues: Maintaining consistent data across distributed services became a significant concern. Without clear data governance, the simplest of tasks can become epic sagas of “finding and understanding data.”
And then there were the people—each team responsible for their own microservice, each with their own deadlines, priorities, and interpretations of “RESTful APIs.” Time pressures only added to the fun, as stakeholders expected the agility of microservices to translate into instant results.
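Those daily fault-tolerance concerns tend to get re-implemented in every service client. Here is a minimal sketch of a retry wrapper with exponential backoff, using a made-up `flaky_service` to stand in for a real network call:

```python
import time

def call_with_retries(call, attempts=3, base_delay=0.1):
    """Invoke a flaky inter-service call, retrying with exponential backoff.

    Every service client ends up carrying logic like this, which is part of
    the operational overhead microservices introduce.
    """
    for attempt in range(attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries: surface the fault to the caller
            time.sleep(base_delay * 2 ** attempt)  # back off before retrying

# Simulate a service that fails twice before answering.
state = {"calls": 0}
def flaky_service():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ConnectionError("service unavailable")
    return {"status": "ok"}

result = call_with_retries(flaky_service)
```

Multiply this by timeouts, circuit breakers, and serialization, and the “layers of complexity” above stop being abstract.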
Despite these challenges, the move to microservices was essential for the company’s growth. However, it was clear that without proper management, the complexity could outweigh the benefits.
The hidden complexities of monolithic applications
On the other hand, monolithic applications, often the backbone of legacy systems, tend to accumulate complexity over time. I recall working with an enterprise where the core application had evolved over years, integrating numerous features and fixes without a cohesive architectural strategy. The result was a massive codebase where components were tightly coupled, making it difficult to implement changes or updates without unintended consequences.
This complexity manifested in several ways:
- Slower development cycles: Even minor changes required extensive testing across the entire application.
- Inflexibility: The application couldn’t easily adapt to new business requirements or technologies.
- High risk of errors: Tightly coupled components increased the likelihood of bugs when making modifications.
But beyond the code, there were people and time at play. Teams had changed over the years, with knowledge lost as developers, business analysts, sysadmins, software architects, engineers, and leaders moved on. Institutional memory was fading, and documentation was, well, let’s say “aspirational.” Time had turned the once sleek application into a relic, and people—each with their unique coding styles and architectural philosophies—had added layers of complexity that no one fully understood anymore.
Adding people and time to the complexity equation
It’s often said that technology would be simple if it weren’t for people and time. People bring creativity, innovation, and, occasionally, chaos. Time brings evolution, obsolescence, and the ever-looming deadlines that keep us all on our toes.
In both monolithic and microservices environments, people and time contribute significantly to complexity:
- Knowledge silos: As teams change over time, critical knowledge can be lost. New team members may not have the historical context needed to make informed decisions, leading to the reinvention of wheels—and occasionally square ones.
- Diverging priorities: Different stakeholders have different goals, and aligning them is like trying to synchronize watches in a room full of clocks that all think they’re the master timekeeper.
- Technological drift: Over time, technologies evolve, and what was cutting-edge becomes legacy. Keeping systems up-to-date without disrupting operations adds another layer of complexity.
- Cultural differences: Different teams may have varying coding standards, tools, and practices, turning integration into an archaeological expedition.
Addressing complexity with vFunction
Understanding the intricacies of both monolithic and microservices architectures led me to explore tools that could aid in managing and reducing complexity. One such tool is vFunction, an AI-driven architectural observability platform designed to facilitate the decomposition of monolithic applications into microservices and to observe the behaviour and architecture of distributed systems.
Optimizing microservices architectures
In microservice environments (distributed systems), vFunction plays an important role in deciphering complexity:
- Identifying anti-patterns: The tool detects services that are overly chatty, indicating that they might be too granular or that boundaries were incorrectly drawn. Think of it as a polite way of saying, “Your services need to mind their own business a bit more.”
- Performance enhancement: By visualizing service interactions, we could optimize communication paths and reduce latency. It’s like rerouting traffic to avoid the perpetual construction zone that is Main Street.
- Streamlining dependencies: vFunction helps us clean up unnecessary dependencies, simplifying the architecture. Less is more, especially when “more” equals “more headaches.”
How vFunction helps with monolithic complexity
When dealing with complex monolithic systems, vFunction can:
- Automate analysis: vFunction scans the entire system while it runs, identifying dependencies and clustering related functionalities. This automated analysis saves countless hours that would otherwise be spent manually tracing code. It’s like having a seasoned detective sort through years of code crimes.
- Define service boundaries: The platform suggests logical partitions based on actual usage patterns, helping us determine where natural service boundaries exist. No more debates in meeting rooms resembling philosophical symposiums.
- Prioritize refactoring efforts: By highlighting the most critical areas for modernization, vFunction allows us to focus first on the components that will deliver the most significant impact. It’s amazing how a clear priority list can turn “we’ll never finish” into “we’re making progress.”
Bridging the people and time gap with vFunction
One of the unexpected benefits of using vFunction is its impact on the people side of the equation:
- Knowledge transfer: The visualizations and analyses provided by the tool help bring new team members up to speed faster than you can say “RTFM.”
- Unified understanding: With a common platform, teams have a shared reference point, reducing misunderstandings that usually start with “I thought you meant…”
- Accelerated timelines: By embedding vFunction in the modernization process, teams can meet tight deadlines without resorting to the classic solution of adding more coffee to the project.
Practical use case and lessons learned
Now that this is said and done, there are real-life lessons that you should take to heart (and brain…)
Does your organisation actively manage its architecture? How are things built, maintained, and planned for the future? Is architecture part of the culture?
Every tool is useless if it is not used.
In the project where we transitioned a large European bank to microservices, using vFunction (post-reengineering) provided teams with fine-tuned architecture insights (see video at the top of this blog). We analyzed both “monolithic” apps and “distributed” apps built from microservices. We identified multi-hop and cyclic calls between services, god classes, dead code, high-complexity classes… and much more.
We used the initial measurements to create a target architecture. vFunction showed us where complexity and coupling lie and how they impact the architecture.
One major blocker is failing to treat architecture as a critical artifact that teams own. Taking care of architecture “later” is like building a house, walls and everything, and only afterwards deciding where the living room goes, where the bathroom is, and how many doors and windows we need. That kind of approach will not make a family happy or a home safe.
Personal reflections on using vFunction
“What stands out to me about vFunction is how it brings clarity to complex systems. It’s not just about breaking down applications but understanding them at a fundamental level. This comprehension is crucial for making informed decisions during modernization.”
In both monolithic and microservices environments, vFunction’s architectural observability provided:
- Visibility: A comprehensive view of the application’s structure and interdependencies.
- Guidance: Actionable insights that informed our architectural strategies.
- Efficiency: Streamlined processes that saved time and resources.
Conclusion: Never modernize again
Complexity in software architecture is inevitable, but it doesn’t have to be an insurmountable obstacle. Whether dealing with the entanglement of a monolith or the distributed nature of microservices, tools like vFunction offer valuable assistance.
By leveraging platforms such as vFunction, organizations can:
- Reduce risk: Make changes with confidence, backed by data-driven insights.
- Enhance agility: Respond more quickly to business needs and technological advancements.
- Promote innovation: Free up resources to focus on new features and improvements rather than wrestling with complexity.
From my experiences, embracing solutions that tackle architectural complexity head-on is essential for successful modernization. And more than that, it is a tool that should help us never modernize again, by continually monitoring architectural debt and drift, helping us to always keep our systems modern and fresh. It’s about empowering teams to understand their systems deeply and make strategic decisions that drive growth.