How to Deploy your Infrastructure in Under 30 Minutes

Deploying and configuring your infrastructure takes time. Installing and configuring the services you need takes time. Deploying your applications takes time. Setting up monitoring and logging takes a while as well. But it doesn’t have to. In this article, we’re going to tell you how we went from zero to a working, production-ready environment before the coffee got cold.

First 10 minutes: create the infrastructure

Terraform is an open-source tool that uses the APIs of various cloud providers to let you describe your infrastructure as code. This has several advantages:

  • it allows you to write reproducible infrastructure. You can use the same code with different configuration variables to deploy different environments (QA, staging, production) and leverage community-built modules to quickly solve common tasks.
  • the current state of your infrastructure lives in code. You can share that code with your coworkers, provision it, keep a history of modifications, and, very important for our use case, automate it.
  • you can see the plan before you apply it. Terraform saves the state in a file, so it can show you exactly which changes will be made to the current infrastructure before you run the apply command.

Here’s a short snippet to get an idea of how things look:

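This is a minimal, illustrative sketch rather than our real configuration – it assumes an AWS setup built from the community VPC and EKS modules, and the module versions, region, and variable names are placeholders:

```hcl
# Illustrative only: a VPC plus a managed Kubernetes (EKS) cluster,
# built from community modules. Inputs vary between module releases.
variable "region" {
  default = "eu-west-1"
}

variable "environment" {
  default = "staging"
}

provider "aws" {
  region = var.region
}

module "vpc" {
  source = "terraform-aws-modules/vpc/aws"

  name            = "${var.environment}-vpc"
  cidr            = "10.0.0.0/16"
  azs             = ["eu-west-1a", "eu-west-1b", "eu-west-1c"]
  private_subnets = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  public_subnets  = ["10.0.101.0/24", "10.0.102.0/24", "10.0.103.0/24"]
}

module "eks" {
  source = "terraform-aws-modules/eks/aws"

  cluster_name = "${var.environment}-cluster"
  vpc_id       = module.vpc.vpc_id
  subnet_ids   = module.vpc.private_subnets
}
```

Running terraform plan against a new set of variables shows exactly what would be created, and terraform apply builds it – the same code, just a different environment.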

What we got after 10 minutes:

  • networking setup
  • the bastion
  • databases
  • a working, auto-scaling, Kubernetes cluster

Next 15 minutes: deploy the services and applications

As kubernetes.io puts it: “Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.”

Kubernetes does a lot for you. Here are just the main points:

  • self-healing – it automatically restarts failed pods, keeping your applications running smoothly;
  • horizontal scaling – you can easily scale pods based on a number of metrics, and if your cluster lives in the cloud, you can scale the number of Kubernetes nodes as well, giving you a lot of headroom (see the sketch after this list);
  • automated rollouts and rollbacks – Kubernetes progressively rolls out changes to your application or its configuration, monitoring application health so it doesn’t kill all your instances at the same time. If something goes wrong, Kubernetes rolls the change back for you;
  • efficiency – Kubernetes allows applications with very different requirements (OS, packages, etc.) to live on the same node, so you need fewer physical machines;
  • infrastructure abstraction – by deploying everything in Kubernetes, if you decide to change provider (from on-premise to cloud, AWS to Google Cloud, etc.), you only need another working Kubernetes cluster and everything will just work.
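To make the horizontal-scaling point concrete, here is a minimal HorizontalPodAutoscaler sketch – the deployment name, replica bounds, and CPU target are illustrative, not our actual settings:

```yaml
# Illustrative: scale the "data-service" deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: data-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: data-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```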

Helm

Often called the “Kubernetes package manager”, Helm takes collections of Go templates (called charts), combines them with configuration values to generate Kubernetes resources (deployment.yaml, service.yaml, etc.), and deploys them to the Kubernetes cluster.

Here’s why it’s useful:

  • reusing open-source components – you can find ready-made charts for a lot of applications, which speeds up deployment and configuration considerably
  • multi-environment – once you define a chart for your application, you can keep a separate values file per environment, cutting down on redundancy (see the sketch after this list)
  • hooks – you can define pre- and post-install hooks, so you can also automate things like database migrations
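As an example of the multi-environment point above, a per-environment values file might look like this – the chart name, keys, and image are illustrative, not our actual configuration:

```yaml
# Illustrative values/staging.yaml for an application chart.
# Deployed with: helm upgrade --install data-service ./charts/data-service -f values/staging.yaml
replicaCount: 2

image:
  repository: registry.example.com/data-service
  tag: "1.4.2"

ingress:
  enabled: true
  host: data.staging.example.com

resources:
  requests:
    cpu: 100m
    memory: 256Mi
```

The chart stays the same; only the values file changes from environment to environment.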

Bitbucket pipelines

Pipelines are the glue that ties everything together. We’ve seen above how to deploy your infrastructure, your services, and your applications; with pipelines, you can automate that whole process (a sketch of such a pipeline follows the examples below).

Here are some useful examples:

  • applying infrastructure changes after the code in the repository is adjusted
  • updating the configuration of running services
  • deploying the applications
  • packaging Helm charts
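Putting it together, a bitbucket-pipelines.yml along these lines can run the Terraform and Helm steps on every push to the main branch – the images, steps, and chart paths are illustrative, and credentials and kubeconfig are assumed to be provided through repository variables:

```yaml
# Illustrative bitbucket-pipelines.yml: apply infrastructure changes,
# then deploy the application chart. Image tags and paths are placeholders.
image: hashicorp/terraform:1.5

pipelines:
  branches:
    main:
      - step:
          name: Apply infrastructure changes
          script:
            - terraform init
            - terraform plan -out=tfplan
            - terraform apply -auto-approve tfplan
      - step:
          name: Deploy application
          image: alpine/helm:3.12.0
          deployment: production
          script:
            - helm upgrade --install data-service ./charts/data-service -f values/production.yaml
```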

What we got after 25 minutes:

  • running services: RabbitMQ, EMQ X, PostgreSQL, etc.
  • logging: Elasticsearch, Fluentd, Kibana
  • monitoring: Prometheus, Grafana
  • running applications: data service, admin module, etc.

Next 5 minutes: enjoy the magic of having automated deployments

Setting up the automated deployment of infrastructure, services, and applications takes more than 30 minutes. But once it’s in place, every additional environment you create takes very little time and is, in every way, identical to the existing ones. You won’t need to worry about configuring servers and applications, checking connections, or testing whether things work – it’s all set in code.