Monolithic vs. Microservices: A Pragmatic Comparison

December 10, 2019

Topics: Cloud Insights · 8 minute read

How do you move from monoliths to microservices—and how do you justify the migration? When do you know you should really make the shift? As you may suspect, the answer to those questions is, “It depends”. There’s not a universally correct way to approach an architectural migration. However, there are plenty of ideas to guide you in the process of deciding which architecture is better for you. This post will discuss the pros and cons of both architectures from a pragmatic point of view.

Monolithic Architecture

A monolithic architecture consists of an application, built as a single, tightly coupled system, that runs as a self-sufficient package on a server. The monolith was the de facto standard architecture decades ago and, under certain circumstances (discussed later in this article), may still make the most sense today. Some engineers believe that a monolith is the easiest and most natural way to start developing an application. When this type of development is done right, it facilitates transitions to other architectures.

Let's discuss some of the pros and cons of the monolithic approach with regard to four fundamental topics in the software engineering world: codebase, deployments, monitoring, and scaling.

Codebase

A single codebase can be beneficial as long as it stays simple and small. If you have a small team and your application has no more than a few components, you may find you don’t need microservices. If the application grows and you start noticing more merge conflicts, or if your team expands and it becomes a challenge for developers to work independently, you should consider moving away from the monolith. Having a big codebase in any of these scenarios can prove to be a nightmare.

Deployments

When it comes to continuous integration and continuous deployment, monoliths can make life much easier. Since there is only one codebase, there’s no need to coordinate with other services or teams in order to get your app pushed. In many cases, a linear, synchronous pipeline is enough. However, as your codebase grows, build times will increase, dependency errors will become more frequent, and the cost of the underlying infrastructure will go way up. A big disadvantage of monolithic architecture is that, regardless of the codebase size, you will always have to deploy the full application, even for small changes. To minimize the pain here, keep your codebase simple.
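To make the idea of a linear, synchronous pipeline concrete, here is a minimal sketch in Python. The stage commands (run_tests.sh, the Docker tag, deploy.sh) are placeholders for whatever your project actually uses; the point is that every change flows through the same few steps and produces a single deployable artifact.

```python
# A minimal sketch of a linear, synchronous pipeline for a monolith.
# run_tests.sh, deploy.sh, and the image tag are placeholders.
import subprocess
import sys

STAGES = [
    ["./run_tests.sh"],                               # run the test suite
    ["docker", "build", "-t", "myapp:latest", "."],   # build one artifact for the whole app
    ["./deploy.sh", "production"],                    # ship the single package
]

for stage in STAGES:
    result = subprocess.run(stage)
    if result.returncode != 0:
        # Any failure stops the whole pipeline; there is only one deployable unit.
        sys.exit(result.returncode)
```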

Monitoring

Monitoring a single application is easier than monitoring apps under a microservices-based architecture. Since the application is deployed as a self-sufficient package, all of the requests can be tracked within the same context, without the need to aggregate or correlate entries. In addition, setting up metrics and health checks is simpler in a monolithic architecture, since you don’t need to go beyond the application itself. On the other hand, monitoring a request in a microservice ecosystem can be problematic, since a single request may pass through many components, making tracing a challenge.
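As a rough illustration of how self-contained monitoring can be in a monolith, here is a minimal health-check and metrics endpoint sketched with Flask (the framework choice, routes, and counters are assumptions for the example, not something the post prescribes):

```python
# Minimal health check and metrics for a monolith, sketched with Flask.
# One process serves the whole application, so one endpoint covers it all.
import time

from flask import Flask, jsonify

app = Flask(__name__)
START_TIME = time.time()
REQUEST_COUNT = 0

@app.before_request
def count_requests():
    # Simple in-process counter; no cross-service aggregation is needed.
    global REQUEST_COUNT
    REQUEST_COUNT += 1

@app.route("/healthz")
def healthz():
    # Everything lives in one package, so "healthy" means this process is up.
    return jsonify(status="ok", uptime_seconds=round(time.time() - START_TIME))

@app.route("/metrics")
def metrics():
    return jsonify(requests_served=REQUEST_COUNT)

if __name__ == "__main__":
    app.run(port=8000)
```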

Scaling

With a monolithic architecture, you will need to scale the complete codebase. The larger it gets, the greater the challenges become because you can’t scale independent components. Keeping the codebase small alleviates this issue, but ultimately may not be compatible with your aspirations for the features and functionality of your applications. The upside of the situation is that latency is reduced because all components are in the same package. This means your applications will respond faster.

AWS Ecosystem for Monoliths

There are many services offered by AWS that allow you to deploy applications in a monolithic fashion (and they can be used for microservices as well). They include:

  • Amazon EC2: With EC2, you can deploy your application within a virtual machine and expose it to the internet, much as you would with any bare-metal server on-premises. However, deploying your application on EC2 may be an expensive task, since you will be responsible for both the server provisioning and the application configuration. That said, you don’t necessarily need to refactor your application to get it running on an EC2 instance. If your app runs on a bare-metal server, it will run on EC2 as well. Take a look at this guide for getting started on EC2 within the context of a Django application, and see the sketch after this list for a minimal programmatic example.
  • AWS Elastic Beanstalk: This is the Platform as a Service offering on AWS. If your application is written in one of the supported languages, you can deploy it with Elastic Beanstalk, and AWS will manage the provisioning for you. In this case, you don’t need to worry about scaling, load balancing, etc. Here is an example from the Python ecosystem.
  • AWS Fargate: This is another managed service, but one that exists within the container solutions. It allows you to run containers with minimal configuration and zero provisioning. Check out this guide for details.
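For the EC2 option above, the sketch below shows roughly what launching a single instance for a monolith looks like with boto3. The AMI ID, instance type, key pair, and bootstrap script are placeholders, not values from the original post.

```python
# Sketch: launching a monolith on a single EC2 instance with boto3.
# AMI ID, key pair, and the user-data script are placeholders for illustration.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

user_data = """#!/bin/bash
# Placeholder bootstrap: install dependencies and start the app.
cd /opt/myapp && ./start.sh
"""

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.medium",
    KeyName="my-key-pair",             # placeholder key pair
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
)
print(response["Instances"][0]["InstanceId"])
```

Everything the application needs is baked into that one instance, which is what keeps the setup simple and what eventually limits how finely you can scale it.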

Staying within managed services is generally preferable because it will allow your engineers to devote their time to developing the application as opposed to maintaining the infrastructure on which it runs.

Microservices Architecture

Microservice-based architectures initially came into play as a solution to the issues of monolithic architectures: big codebases, merge conflicts, and team dependencies. A microservices architecture consists of small, specialized services running independently and collaborating among themselves. Usually, responsibilities are split following the domain-driven design paradigm. Instead of one codebase with all of the functionality, you get many small apps, each serving a single purpose.

Microservices alleviated many issues with monolithic architectures, but they also introduced new ones.

Codebase

In microservices, codebases are specialized. As you might expect, you will have as many repositories as services in your application. Having different repositories means that teams are working independently, so you can allocate developers’ efforts to different codebases at the same time without causing collisions. As long as the inputs and outputs remain consistent, developers can chop and change their code to their hearts’ content.
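One way to picture “as long as the inputs and outputs remain consistent” is as a fixed contract per service. The sketch below uses a hypothetical price-quote service; the names and pricing logic are illustrative only.

```python
# Hypothetical price-quote service contract: the request and response shapes
# are fixed, so the implementation behind them can change without breaking callers.
from dataclasses import dataclass

@dataclass
class QuoteRequest:        # contract: what callers send
    sku: str
    quantity: int

@dataclass
class QuoteResponse:       # contract: what callers get back
    sku: str
    total_cents: int

def quote(req: QuoteRequest) -> QuoteResponse:
    # Internals are free to change (new pricing rules, a different data store)
    # as long as this signature and the response shape stay stable.
    unit_price_cents = 499  # placeholder pricing logic
    return QuoteResponse(sku=req.sku, total_cents=unit_price_cents * req.quantity)
```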

Deployments

Because you effectively have many small apps working together, you need to write and maintain a separate pipeline for each of them, which can be very time-consuming. Things get even tougher if you have services written in different languages, so it’s good practice to remain consistent where possible. In addition, synchronization is important; the deployment of one service should not interrupt the normal operation of another one that is already running. Take advantage of the best of what your provider or a third-party service offers here, as implementing CI/CD pipelines from scratch is a big commitment if you don’t have enough DevOps capacity.

For instance, when possible, use a hosted CI/CD product. There are open-source options and also private services like NetApp CI/CD for the pipeline’s implementation. By using a hosted service, you take the provisioning and maintenance of the underlying hardware/software out of the equation and can often benefit from a high level of integration with other common tool sets. 

Monitoring

The challenge of monitoring microservices is a huge topic in and of itself, because debugging and tracing errors is much more complex than it is for monoliths. Because one request can pass through several different services, it can be difficult to identify the origin and terminus of an issue in the event of a failure. If you’re using a managed service, make use of the native monitoring tools, and be sure to explore third-party services as well. Take a look at these open-source options for distributed monitoring.
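One common mitigation is to attach a correlation ID to each incoming request and forward it on every downstream call, so that log entries from different services can be joined afterwards. A minimal sketch, where the header name, service names, and downstream URL are illustrative:

```python
# Sketch: propagating a correlation ID across service calls so that log entries
# from different services can be correlated later. Header name and URL are
# illustrative placeholders.
import logging
import uuid

import requests

logging.basicConfig(level=logging.INFO)

def handle_request(incoming_headers: dict) -> None:
    # Reuse the caller's ID if present; otherwise start a new trace.
    correlation_id = incoming_headers.get("X-Correlation-ID", str(uuid.uuid4()))
    logging.info("order-service handling request, correlation_id=%s", correlation_id)

    # Forward the same ID to the next service in the chain, so its logs carry it too.
    requests.get(
        "http://inventory-service.internal/stock",   # placeholder internal URL
        headers={"X-Correlation-ID": correlation_id},
        timeout=2,
    )
```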

Scaling

With a microservices architecture, you can scale every service independently. Unlike in a monolithic architecture, you can provision more capacity for just the services that need it during traffic peaks, without having to worry about other components that see little traffic.
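As an illustration, the hedged boto3 sketch below registers a single ECS service (the cluster name, service name, and capacity bounds are placeholders) with Application Auto Scaling and gives it its own capacity range, leaving every other service untouched.

```python
# Sketch: scaling one microservice independently with Application Auto Scaling.
# Cluster/service names and capacity bounds are placeholders for illustration.
import boto3

autoscaling = boto3.client("application-autoscaling", region_name="us-east-1")

# Register only the checkout service for scaling; other services keep their
# existing capacity.
autoscaling.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId="service/my-cluster/checkout-service",
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=2,
    MaxCapacity=20,
)
```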

AWS Ecosystem for Microservices

Due to the current popularity of this approach, many cloud providers have increased their offerings that support this architecture, and AWS is no exception. Some of the options include:

  • AWS Fargate: Fargate can also work when it comes to the deployment of independent services. Keep in mind that by using Fargate you give up control over the instances the containers run on, but you gain integration with other AWS services by default.
  • Amazon EKS: If you need a more standardized solution for containers, Kubernetes on AWS could be an option. With Amazon EKS, AWS manages the control plane (master nodes) for you, but you are still responsible for the worker nodes. This isn’t necessarily pain-free, though you do gain greater independence from the cloud provider. Check out the Amazon EKS tutorial for an introductory overview.
  • AWS Lambda: FaaS can be a good match for deploying microservices. If you keep your services small enough (down to the level of functions) and they don’t need to be running all the time, then FaaS could save you money and free you from server provisioning. You can read about possible use cases here. Also, this post by Michael Zaczek describes in detail how FaaS and microservices can work together; a minimal handler sketch follows this list.
  • NetApp NKS: Of course, running workloads in Amazon doesn’t limit you to AWS-native services. Third-party services such as NetApp NKS streamline management whilst giving you the power of choice, running on AWS, Azure, GCP, and on-premises environments. NKS is a great option for flexibility and choice with regard to your cloud provider, whilst minimizing administrative overhead.
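To give a sense of how small a FaaS-style service can be, here is a minimal, hypothetical AWS Lambda handler in Python. The event fields assume an API Gateway proxy integration; the function itself is illustrative.

```python
# Minimal, hypothetical Lambda handler: one small function standing in for a
# microservice. The event fields used here assume an API Gateway proxy event.
import json

def lambda_handler(event, context):
    # Lambda passes the triggering event and a runtime context object;
    # this handler just echoes a greeting built from a query parameter.
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```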

Making the Move is Never Easy, But…

Moving a legacy application to microservices is far from straightforward.

Managers need to assess not only the technical aspects of a possible switch, but also the costs for the underlying infrastructure. Microservices will give you more independence and flexibility, but they will also carry with them more technical complexity. A poorly executed migration is likely to leave you with more pain than managing a monolithic application ever caused. The key to success is to gather metrics, insights, and cost information, and discuss them with your team.
