Five Things to Consider when Building Cloud Native Applications

by Hagen - September 2019

Let’s be clear. Getting “into the cloud” is not about spinning up an EC2 instance on AWS, manually installing all the required software and then deploying your software via SFTP/SCP.

This is no different from provisioning a bare-metal server in a rack somewhere. You are still faced with the age-old problem of owning pets as opposed to cattle.

If this is what you’re doing and you want to know the steps for tweaking or rebuilding your app to be truly cloud native, read on.

But let’s start with the goals. Why have a “cloud native system” at all? If it had to be summed up in one sentence, it would be “in order to get better sleep”.

While there are many benefits to building for the cloud, here are the three big ones as I see them:

  1. When things fail, you can very quickly restore the app with extremely high confidence. No more trying to extract a 20 GB tgz file, only to see “broken pipe”.
  2. Things that work locally will work on staging and production. No more “but this worked on staging!?”
  3. You can easily (automatically) scale up or down, depending on resource requirements. No more being permanently overprovisioned just to deal with Black Friday.

What follows are some of the key aspects to consider when developing apps for the cloud.

1) Containerize your app

This is the first and most important step. Putting your app into a container essentially shrink-wraps the code, all software required to run it and the operating system into one artefact that can be deployed locally, onto staging and onto production. Doing this with Docker is undoubtedly the industry standard. It goes a long way towards fixing “works on staging, fails in production”, since the local, staging and production environments are now identical, right down to the underlying operating system.
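As a concrete sketch, a container image is typically described by a Dockerfile. The one below is for a hypothetical Node.js app (the `server.js` entry point and `node:20` tag are illustrative, not prescriptive):

```dockerfile
# Minimal Dockerfile sketch for a hypothetical Node.js app
FROM node:20

WORKDIR /app

# Install dependencies first, so this layer is cached between builds
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the application code and define how to start it
COPY . .
CMD ["node", "server.js"]
```

From there, `docker build -t myapp .` produces the artefact, and the same image runs unchanged on a laptop, on staging and in production.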


2) Avoid persistent disks

The simplest way to understand and implement this is to offload anything that grows or changes to a third-party service. Let’s look at some examples.

  • Are your users uploading images or files? Don’t store them locally on persistent disks, but offload onto S3, Cloud Storage or (our favourite) Cloudinary and link to them from your app
  • Is your system generating PDFs? Generate them locally and then send them to S3 or Cloud Storage and link directly to them from your app
  • Do you have a database? Don’t manage it yourself, offload this to RDS or Cloud SQL
  • Does your app use sessions? Offload those onto ElastiCache or MemoryStore

The simplest test: if you destroy the container and bring it up again, will everything work as seamlessly as before?
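One way to enforce this is to pass every external dependency in as configuration, so the container itself holds no state. The Compose sketch below illustrates the idea; all service names, endpoints and credentials are made up for the example:

```yaml
# docker-compose.yml sketch: all state lives in managed services,
# so the app container is disposable (endpoints are illustrative)
services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      DATABASE_URL: postgres://app:secret@mydb.abc123.eu-west-1.rds.amazonaws.com:5432/app
      SESSION_STORE_URL: redis://my-cache.abc123.euw1.cache.amazonaws.com:6379
      UPLOADS_BUCKET: my-uploads-bucket
```

With a setup like this, `docker-compose rm -f app && docker-compose up -d` loses nothing: uploads, sessions and data all survive because they never lived in the container.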

3) Embrace Infrastructure as Code

Having your infrastructure defined in code gives tremendous benefits:

  • You can very easily restore / deploy the infrastructure at any given moment in time
  • The infrastructure setup is now automatically “documented”
  • The infrastructure is in version control, giving you oversight into changes over time
  • Making changes is trivial: you make them in code and then apply them. If something goes wrong, you can easily roll back
  • You have complete visibility into the infrastructure. No hidden scripts or config tweaks that were done by people no longer in your organisation

The real game changer here has been Kubernetes (K8s). While it’s hip and fun to bad-mouth YAML (which is typically how one configures K8s), there really is no reason to do so; it’s about as simple as it gets and much prettier than XML.

Kubernetes provides a user-friendly wrapper around tremendously complex problems: container orchestration, auto scaling, service discovery, rollbacks and the ability to self heal.

While some berate K8s as difficult to learn, those who complain should try solving all of these problems without it. Now that is hard.
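To make the “infrastructure in code” idea concrete, here is a minimal Kubernetes Deployment sketch. The app name, image and port are illustrative; the point is that this file lives in version control and can be re-applied at any moment:

```yaml
# Minimal Kubernetes Deployment sketch (names and image are illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3              # K8s keeps three copies running: self healing for free
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.2.3
          ports:
            - containerPort: 3000
```

Deploying is `kubectl apply -f deployment.yaml`, and rolling back a bad change is `kubectl rollout undo deployment/myapp` — the restore and rollback benefits listed above, in practice.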


4) Keep it simple

This is a lesson learnt internally. The more complex your cloud setup, the harder it becomes to debug when things go wrong. A couple of ideas to help you simplify your setup:

  • Offload to third-party services where you can: Similar to the point above, why manage your own database with a primary, replicas and backups when RDS or Cloud SQL will do it for you?
  • SSL is still a pain: Cert Manager for K8s adds a layer of complexity that is often unnecessary. Instead of deploying it, get a free SSL cert from AWS, offload the DNS to Cloudflare, or even just purchase one.
  • Choose your Docker base image carefully: Instead of starting with Ubuntu and installing every package yourself, base your images on pre-packaged, official Docker images. Why build your image from Ubuntu and install Node yourself when you can build on one of the official Node images and have almost nothing left to set up?

5) Keep it lightweight

We’re big fans of basing our images off of Alpine Linux or choosing official Docker Images based on Alpine. Alpine is a Linux distribution designed for security, simplicity and resource efficiency.

Choosing Alpine has a number of benefits:

  • Quicker build time: Many CI/CD tools charge you per build minute. With smaller base images, build and deploy times can be drastically shortened; an Alpine base image is around 30x smaller than a Debian one! Quicker builds and deploys have many tangible benefits, especially when you are deploying continuously or, heaven forbid, hotfixing a critical bug that has crept into production.
  • Security built in: Alpine was (and I quote) “designed with security in mind”
  • Quicker scale up / down time: If you are auto scaling, you want your instances to spin up (and down) as quickly as possible so that they can serve requests sooner. The smaller the image, the quicker it can be pulled and started.
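Switching to Alpine is often a one-line change. Sticking with the hypothetical Node.js app from earlier sections (the entry point and tag are illustrative), the only difference is the base image:

```dockerfile
# Same hypothetical Node.js app, but on an Alpine base image;
# node:20-alpine is a fraction of the size of the Debian-based node:20
FROM node:20-alpine

WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]
```

One caveat worth knowing: Alpine uses musl libc rather than glibc, so native dependencies occasionally need extra packages, which is why it pays to test the Alpine image the same way you test everything else.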

If the above sounds overwhelming, start with the first step. 

Dockerizing your applications will solve many problems that you might not know you had. Your developers can now work on multiple projects, with multiple different dependencies and software requirements, on a single machine.

Provisioning new machines for developers is also trivial. Install Git, Docker, Docker-Compose, VS Code and you’re done. 

Want them to work on Project X? No problem:

git clone git@github.com:xxx.git && docker-compose up -d

Want them to work on Project Y? No problem:

git clone git@github.com:yyy.git && docker-compose up -d

Etc. You get the picture.

Once your apps are “Dockerized”, you’ve taken the first and most important step in building for the cloud. Start there.