Easing out the deployment pipeline - Part 1

One of the major components of any infrastructure is how the deployment pipeline is modeled and how it is exposed to its end user: the developer.

The Engineering culture
At Belong, we have always believed in instilling a culture of end-to-end responsibility. The developer is responsible not only for developing the application but also for deploying and monitoring it in production - the entire life cycle.

The primary responsibility of the infrastructure team is to enable this, and to make the deployment pipeline as robust and non-intrusive as possible.

The choice of tools and frameworks goes a long way in establishing this pipeline. Currently, the DevOps stack consists of:

Terraform - Environment provisioning
Ansible - Configuration management
Strider - Continuous deployment
Datadog - Monitoring
HEK - Logging infrastructure (Heka, Elasticsearch and Kibana)
Docker - Dev environments
Cyclops - An internal tool for extended environment provisioning

This post is a bare-bones introduction to how these tools are pieced together to form the deployment pipeline. It will also serve as the basis for future posts from the infrastructure team.


The life cycle of an application revolves around the environment on which it is deployed. The more idempotent the components of the environment are, the more predictable the application becomes. While there are quite a few configuration management systems out there, we chose to go with Ansible - its agentless model made much more sense in a multi-cloud setup, and most of our backend is written in Python, making it easier to tweak Ansible's Python API should the need arise.
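The agentless model boils down to this: Ansible only needs SSH access to the hosts, so a single inventory file can group machines across providers without installing anything on them first. A minimal sketch (all hostnames and group names here are hypothetical, not our actual inventory):

```ini
# inventory - hosts grouped per cloud, no agent required on any of them
[web_aws]
web1.example.com
web2.example.com

[web_gce]
web3.example.com

# a parent group spanning both clouds, targetable as one unit
[web:children]
web_aws
web_gce
```

A playbook targeting `web` then runs against both clouds in a single pass.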

How is Ansible being used?
The different services that run internally (and there are a lot of them) each have their own playbook, with roles shared between them - all the playbooks are pushed to the same repository.
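The shape of such a playbook is roughly the following - a thin per-service file that composes shared roles with one service-specific role (the service and role names below are hypothetical placeholders, not our real playbooks):

```yaml
# playbooks/api-service.yml - per-service playbook built from shared roles
- hosts: api_servers
  become: true
  roles:
    - common        # shared across all services: users, ssh, packages
    - nginx         # shared web-server role
    - api-service   # role specific to this service
```

Keeping every playbook in one repository means a fix to a shared role like `common` propagates to all services on their next run.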

Once a server is provisioned (more on this process in a later blog post), the playbook is run to bring up the environment on the instance. At the end of the run, the instance is ready to serve requests. Strider is used as the CI tool, and all further code changes are pushed to the instances via Strider.

Extending Ansible - Enter Packer

Packer is a handy tool from the good folks at HashiCorp. It takes a variety of provisioners (Ansible, Chef, shell scripts) and, if configured to do so, outputs multiple artifacts. The same Ansible playbook can therefore be used to build an Amazon AMI for the production setup while also creating a Vagrant box for developers to work on. We picked up Packer from the perspective of disaster recovery and autoscaling.
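A stripped-down Packer template for this looks something like the sketch below: an `amazon-ebs` builder bakes the AMI, and the Ansible provisioner runs an existing playbook against the build instance. The region, source AMI and playbook path are illustrative placeholders; adding a VirtualBox builder with the `vagrant` post-processor to the same template is what yields the developer box from the same playbook.

```json
{
  "builders": [
    {
      "type": "amazon-ebs",
      "region": "us-east-1",
      "source_ami": "ami-xxxxxxxx",
      "instance_type": "t2.micro",
      "ssh_username": "ubuntu",
      "ami_name": "api-service-{{timestamp}}"
    }
  ],
  "provisioners": [
    {
      "type": "ansible",
      "playbook_file": "playbooks/api-service.yml"
    }
  ]
}
```

A `packer build` on this template leaves behind a timestamped AMI that is a fully provisioned snapshot of the environment.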

How are we using Packer?
All the Ansible playbooks are fed into Packer configurations to create AMIs, which can then be used for rolling deploys behind a load balancer or as the base for new instances.

What next?
While Packer does make it simple to ensure that idempotent components form the basis of the infrastructure, it does not scale well for us given the nature and frequency of our deployments.

We are now experimenting with Docker to see how it holds up at the center of a fast-moving deployment pipeline. We have adopted Docker for dev environments and intend to push it to production via staging. We will share our learnings as we adopt this alternative pipeline.

Subscribe to the blog to get updates from the tech community at Belong.