Evolution of Software Architecture

nemanjko Jul 20, 2016 Architecture


Software architecture has gone through several phases in the last 30 years. Now that we are on the verge of a new seismic shift in software architecture, let's go through its evolutionary steps and perhaps imagine what the future might bring. The focus of this article is on web/mobile applications, as they represent the majority of applications we interact with today. The techniques described here can be applied to client-server/desktop apps as well, with slight modifications.

I would like to start the architecture journey by looking at the scalability cube as defined by AKF Partners. It is a three-dimensional scaling space where each axis represents a way to scale an application. The ultimate goal of any architect is to design an application that works as expected under near-infinite load. By employing a combination of different architectural styles, architects can navigate the scalability cube in order to reach the Holy Grail of architecture - the infinite scalability point.

Monolith

Even today, a usual starting point for many applications is an N-layered architecture that is packaged and deployed as a single unit. This type of architecture is referred to as a monolith. Architects often start with 3 layers, as shown in the picture: a presentation layer (green box), a business layer (blue box) and a data layer (orange box). Over time the number of layers can increase, but for simplicity we will assume that we are creating a 3-layered architecture.

Monoliths are great to start with. Teams can quickly create an application prototype, package it as one deployment unit and run it in production. The architecture is simple enough to be understood by all developers. The logical separation of layers is intuitive and lets front-end designers, backend developers and database administrators work in isolation to a certain extent.
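To make the layering concrete, here is a minimal sketch of the 3-layered monolith in Python. All class and method names are hypothetical, invented for illustration - the point is simply that all three layers live in one process and ship as one deployable unit.

```python
class DataLayer:
    """Data layer: owns storage access (an in-memory dict stands in for a database)."""
    def __init__(self):
        self._users = {}

    def save_user(self, user_id, name):
        self._users[user_id] = name

    def find_user(self, user_id):
        return self._users.get(user_id)


class BusinessLayer:
    """Business layer: validation and domain rules, talks only to the data layer."""
    def __init__(self, data):
        self.data = data

    def register_user(self, user_id, name):
        if not name:
            raise ValueError("name must not be empty")
        self.data.save_user(user_id, name)


class PresentationLayer:
    """Presentation layer: formats responses for the client."""
    def __init__(self, business):
        self.business = business

    def handle_register(self, user_id, name):
        self.business.register_user(user_id, name)
        return f"User {name} registered"


# All three layers are wired together inside a single deployable unit.
app = PresentationLayer(BusinessLayer(DataLayer()))
```

The intuitive part is visible here: each layer only knows about the one below it. The dependency problem is visible too - changing `save_user` ripples through every layer above it.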

As soon as the application leaves the prototyping phase and goes into production, the first problems start showing up. Two problems in monolithic applications are particularly annoying:

1. dependency - Teams become dependent on each other and the whole system starts moving at the speed of the slowest team member. As new functions are added, interdependencies within the application start showing up. Changes in one area impact another. The whole code base enters a never-ending refactoring story, and the more developers try to solve issues, the more the code base starts looking like spaghetti. Testing effort increases and the whole team starts showing the first signs of depression.

2. scalability - Monoliths quickly reach their scalability limit. Even a well-architected application with a clean code structure, several layers of caching and an optimized database structure will eventually reach its limit when deployed as a monolith. For a typical web application deployed as a monolith on three servers (one for each layer), I would say that the first scalability problems show up at around 10,000 concurrent users. Many web applications fail under load well before they reach 10k users, but if developed properly the 10k mark should be achievable (depending, of course, on the type of web application).

If scalability starts becoming an issue, the first thing you can do is put the application on a bigger server (scale up). Bigger boxes with enough resources (memory, CPU, bandwidth, IOPS) can drive your scalability up to 50k-100k users. But make sure your wallet can scale infinitely as well, since these monster boxes are anything but cheap. If you look at the biggest Amazon instance (x1.32xlarge), you get 1,952 GB of memory, 128 CPUs and 4 TB of storage - all for just $13.338 per hour.

This is the point when you turn back to your architect and ask for another solution.

Distributed Monolith

To solve the scalability issues of the monolith, architects usually start by traversing the X axis of the scalability cube. Horizontal scaling is achieved by using one or more load balancers on different layers of the architecture. In our 3-layered application, we would first place a load balancer in front of the presentation layer. That would distribute load across several web servers, while the application and database servers would remain as-is.

When the application layer starts falling behind, the cure is to add another load balancer in front of the business layer. At this level, certain changes to the application logic are probably needed. If the presentation and business layers are stateless, then load balancers can easily route users to any of the less busy web and application servers. If any of these layers is stateful, then load-balancer features such as sticky sessions need to be used.
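The two routing strategies above can be sketched as a toy load balancer - a simplified illustration with made-up names, not production code: round-robin for stateless layers, and hash-based "sticky" routing so a stateful session always lands on the same server.

```python
import hashlib
import itertools


class LoadBalancer:
    """Toy load balancer: round-robin when layers are stateless,
    hash-based sticky routing when sessions are stateful."""

    def __init__(self, servers, sticky=False):
        self.servers = servers
        self.sticky = sticky
        self._round_robin = itertools.cycle(servers)

    def route(self, session_id):
        if self.sticky:
            # Hash the session id so the same session always
            # maps to the same backend server.
            digest = hashlib.md5(session_id.encode()).hexdigest()
            return self.servers[int(digest, 16) % len(self.servers)]
        # Stateless layer: any server will do, just take the next one.
        return next(self._round_robin)
```

The trade-off is visible in the code: sticky routing preserves in-memory session state, but it also means one hot session cannot be spread across servers, which is why stateless layers scale out more easily.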

To move our architecture further towards infinite scalability, a third load balancer can be introduced - one that distributes load over several database servers. Load balancing databases is not a very common method: the more databases there are in the cluster, the more time-consuming the replication jobs between them become in order to preserve data consistency.

A better way to increase database scalability is data sharding, that is, partitioning the database along some logical separation of the domain. You can often see data sharding implemented in government institutions - one desk accepts applications from people whose initials are A to D, the next desk takes E to K, and so on. With data sharding, the load on databases can be spread among many servers while still keeping consistency and a simple transactional model.

All of the previous scaling scenarios are workarounds for the monolithic architecture. The monolith is still present, just fully or partially distributed across more servers.

The distributed monolith moves our architecture one step closer to the infinite scalability point. Horizontal scaling and data sharding traverse the X and Z axes of the scalability cube. However, without going up the Y axis we would soon hit the scalability limit of the distributed monolith. It is hard to say how many users a distributed monolith can support, but the fact is that the more you scale out or partition your data into shards, the more difficult your system becomes to maintain. Before we get to the architecture that pushes you up the Y axis, it is important to mention one more concept - SOA.

SOA

As previously mentioned, the two main pain points with monoliths are dependency and scalability restrictions. While scalability was more or less successfully solved with the distributed monolithic architecture, dependency remained a problem. As applications grew, they started exchanging data with surrounding applications. The overall landscape became complex and overcrowded with peer-to-peer connections. A new architecture emerged as a possible solution to this problem - Service Oriented Architecture (SOA).

SOA is basically a way to connect different monoliths in a consistent way.

SOA was well received by enterprises. There are SOA opponents who say that the Enterprise Service Bus (ESB), as the main component of SOA infrastructure, became just another complex monolith. I personally think that any technology can be abused. SOA brings many benefits when implemented rationally and according to best practices and design patterns.

SOA didn't do anything for application scalability, but it introduced an important integration solution that helped applications talk to each other. The traditional implementations of SOA offered by the big vendors in this space didn't do the integration style many favours, but they paved the way for more lightweight services.

Microservices

The current architectural mainstream is moving in the direction of microservices. You can think of microservices as a collection of small monoliths that are designed to be fully independent but together form a single application. They are independent in the sense that they can evolve and be deployed separately. Teams working on microservices communicate with each other via well-defined API contracts. Each microservice has its own lifecycle and technology of choice. Reactive messaging patterns are used to create isolation and independence between microservices. Each microservice has its own bounded context and often uses Docker as a way of packaging all dependencies into one deployable unit.
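The messaging-based isolation can be illustrated with a toy example. Here an in-process queue stands in for a real message broker (such as RabbitMQ or Kafka), and both service classes and the event names are hypothetical - the point is that one service publishes an event instead of calling the other directly.

```python
from queue import Queue

# The queue plays the role of a message broker between two services.
order_events = Queue()


class OrderService:
    """Owns orders. Publishes an event rather than calling inventory directly."""
    def place_order(self, item):
        order_events.put({"event": "order_placed", "item": item})


class InventoryService:
    """Owns stock. Consumes events at its own pace, with its own state."""
    def __init__(self):
        self.stock = {"book": 3}

    def process_events(self):
        while not order_events.empty():
            event = order_events.get()
            if event["event"] == "order_placed":
                self.stock[event["item"]] -= 1
```

Neither service imports the other, so each can be rebuilt, redeployed or rewritten in a different technology as long as the event contract stays stable. The price is that inventory is only eventually consistent with orders - the eventual-consistency baggage discussed below.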

Modern technology companies (Amazon, Netflix, Google, Facebook etc.) rely heavily on microservice architecture to push new features to production several times per day. However, it is important not to rush into microservices just because the big boys are using them. Take a step back and understand why you would actually need microservices. They can certainly drive you closer to the infinite scalability point by pushing your architecture up the Y axis. But microservices come with a lot of baggage as well. Some new techniques need to be learned in order to be successful with them:
- your teams need to be using DevOps to the maximum
- tools for orchestration, service discovery and registration need to be mastered
- eventual consistency (a consequence of the CAP theorem) is something you need to find a solution for in your application (or just live with)

In large traditional enterprises the need for microservices might be smaller than the industry suggests. They are certainly not mandatory.

Finally, our scalability cube becomes complete. By employing a combination of previous architectures, you can almost find your way to the infinite scalability point and get that Holy Grail of architecture. Almost...

The journey through architecture styles now meets the cloud. No application can scale infinitely unless the underlying infrastructure can scale infinitely. Luckily, one bookstore came up with a solution several years back. Amazon was the first to open its infrastructure to the public. Soon after, other major players (Microsoft, Google) followed, and now there are over 15 cloud providers out there.

Not only has the cloud introduced managed services for all of our application and network components (managed databases, DNS, load balancers, storage, computing etc.), it went even further and invented something completely revolutionary - serverless architecture.

Serverless

The cloud gave us a way to provision new servers within seconds and to pay only for what we use. Then that bookstore offered something even better.

Two years ago Amazon introduced Lambdas. Not a great name - some suggest they should have been called Functions-as-a-Service or similar. Lambdas are exactly that: functions that are triggered when a certain event occurs, such as receiving an HTTP request, a write to a database, the creation of a file etc. As a developer, all you have to take care of is writing the body of the Lambda function. Where that function is executed and how it scales is not your problem anymore. Lambdas can scale from one user to a billion. You just pay per millisecond of Lambda execution. Other cloud providers have the same concept, but at least they named it better: in Google and Azure they are called Functions.
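A minimal Lambda handler in Python might look like the sketch below. The event shape assumes an API Gateway HTTP trigger (the `queryStringParameters` field), and the greeting logic is invented for illustration - but the body of this one function really is all the developer writes; provisioning and scaling are the platform's job.

```python
import json


def lambda_handler(event, context):
    """Hypothetical handler, triggered by an HTTP request via API Gateway."""
    # API Gateway passes query-string parameters in the event dict.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

There is no server, port or process to manage in this code - the platform invokes the function per event and bills only for the execution time.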

I really think this is a revolutionary approach, as it could potentially bring an end to the monolith. Business and data layers become small interlinked chunks of managed functions and databases. You don't know where and how they are stored or executed, or how they scale. They just do. Automatically. (Under the hood, Lambdas/Functions run in Docker containers.)

If you think about it for a moment, many things we used to work on might be gone soon. All those servers we spent nights configuring are not our problem anymore. We just write a function and it runs somewhere.

The serverless movement is still not mainstream. Working with Lambdas is still cumbersome (see 3 Reasons AWS Lambda Is Not Ready for Prime Time). The first frameworks are popping up that can help isolate some of the issues with serverless architectures (see Serverless). It is also hard to emulate serverless environments locally. Developers like to work locally and then deploy their code to the cloud. But that is changing as well. The company Cloud9 has a development environment (IDE) in the cloud. It supports over 40 programming languages and can deploy code to Amazon, Azure and Google. I guess there won't be a need to work locally anymore. We will be developing, testing and running everything in the cloud.

Future?

It's not important whether the serverless movement becomes mainstream or not; what is important is that the cloud has established itself as the innovation force in the software architecture space. Anything that can be automated and offered as a managed service will be. All the plumbing work that developers had to do in the past (installing and configuring application servers, tuning databases, setting up replication jobs manually etc.) will be taken care of by the cloud.

The near-infinite scalability point might not be the Holy Grail anymore. It could become the default starting point, offered out of the box.

Whatever the future brings, it will be very exciting. Stay tuned...


