A Practical Guide To HashiCorp Consul — Part 2
This is part 2 of a two-part series, A Practical Guide to HashiCorp Consul. The previous part focused primarily on understanding the problems that Consul solves and how it solves them. This part focuses on a practical application of Consul in a real-life example. Let's get started.
With most of the theory covered in the previous part, let's move on to a practical example of Consul.
What are we Building?
To show how our web app would scale in this context, we are going to run two instances of a Django app. To make this even more interesting, we will run MongoDB as a Replica Set with one primary node and two secondary nodes.
Given we have two instances of the Django app, we need a way to balance the load between them, so we are going to use Fabio, a Consul-aware load balancer, to reach the Django app instances.
This example will roughly help us simulate a real-world practical application.
The complete source code for this application is open-sourced and is available on GitHub — pranavcode/consul-demo.
Note: The architecture we are discussing here is not specifically tied to any of the technologies used to build the app or data layers. This example could just as well be built with Ruby on Rails and Postgres, Node.js and MongoDB, or Laravel and MySQL.
How Does Consul Come into the Picture?
We are deploying both the app and data layers as Docker containers. They are going to be built as services and will talk to each other over HTTP.
Thus, we will use Consul for service discovery. This will allow the Django servers to find the MongoDB primary node. In this example we will resolve services via Consul's DNS interface.
Consul will also help us auto-configure Fabio as a load balancer to reach the instances of our Django app.
We are also using Consul's health-check feature to monitor the health of every instance in the whole infrastructure.
Consul provides a beautiful user interface out of the box, as part of its Web UI, to show all the services on a single dashboard. We will use it to see how our services are laid out.
Setup: MongoDB, Django, Consul, Fabio, and Dockerization
We will keep this as simple and minimal as possible to the extent it fulfills our need for a demonstration.
The MongoDB setup we are targeting is a MongoDB Replica Set: one primary node and two secondary nodes.
The primary node manages all write operations and maintains the oplog to record the sequence of writes and replicate the data across the secondaries. We are also configuring the secondaries to serve read operations. You can learn more about MongoDB Replica Sets in the official documentation.
We will name our replica set 'consuldemo'.
We will run MongoDB on the standard port 27017 and supply the name of the replica set on the command line using the `--replSet` parameter.
As the documentation describes, MongoDB also allows configuring the replica set name via a configuration file, using the `replication` section as below:
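For reference, a minimal `mongod` configuration file sketch (the paths and bind address here are illustrative):

```yaml
# mongod.conf (illustrative)
net:
  port: 27017
  bindIp: 0.0.0.0
storage:
  dbPath: /data/db
replication:
  replSetName: consuldemo
```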
In our case, the replication set configuration that we will apply on one of the MongoDB nodes, once all the nodes are up and running is as given below:
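A sketch of that replica set configuration, assuming the three nodes are reachable by the hostnames `mongo_1`, `mongo_2` and `mongo_3` (the hostnames are illustrative):

```javascript
// Run once in the mongo shell on any one of the nodes.
rs.initiate({
  _id: "consuldemo",
  members: [
    { _id: 0, host: "mongo_1:27017" },
    { _id: 1, host: "mongo_2:27017" },
    { _id: 2, host: "mongo_3:27017" }
  ]
})
```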
This configuration will be applied to one of the pre-defined nodes, and MongoDB will decide which nodes become primary and secondary.
Note: We are not forcing the set creation with any pre-defined designation of who becomes primary or secondary, to keep the service discovery dynamic. Normally, the nodes would be assigned specific roles.
We are allowing reads from secondaries, with the nearest node as the read preference.
We will start MongoDB on all nodes with the following command:
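Something along these lines (the data path is illustrative):

```shell
mongod --bind_ip 0.0.0.0 --port 27017 --dbpath /data/db --replSet consuldemo
```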
This gives us a MongoDB Replica Set with one primary instance and two secondary instances, running and ready to accept connections.
We will discuss containerizing the MongoDB service in the latter part of this article.
We will create a simple Django project that represents a blog application and containerize it with Docker.
Building the Django app from scratch is beyond the scope of this tutorial; we recommend Django's official documentation to get started with a Django project. We will still go through some important aspects, though.
As we need our Django app to talk to MongoDB, we will use Djongo, a MongoDB connector for the Django ORM. We will set up our Django settings to use Djongo and connect to our MongoDB. Djongo is pretty straightforward to configure.
For a local MongoDB installation it only takes a few lines of configuration:
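A sketch of that setting, assuming a database named `blogs` (the name is illustrative):

```python
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'blogs',
    }
}
```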
In our case, as we need to access MongoDB running in another container, our config would look like this:
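A sketch, assuming the primary is registered in Consul under the service name `mongo-primary` so that it is resolvable via Consul DNS (service and database names are illustrative):

```python
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'NAME': 'blogs',
        # Resolved through Consul's DNS interface.
        'HOST': 'mongo-primary.service.consul',
        'PORT': 27017,
    }
}
```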
- ENGINE: The database connector to use for Django ORM.
- NAME: Name of the database.
- HOST: Host address that has MongoDB running on it.
- PORT: Port on which MongoDB is listening for requests.
Djongo internally talks to PyMongo and uses MongoClient to execute queries on MongoDB. Depending on our needs, we could also use other MongoDB connectors available for Django, such as django-mongodb-engine, or use PyMongo directly.
Note: We are currently reading and writing via Django to a single MongoDB host, the primary, but Djongo can also be configured to talk to secondary hosts for read-only operations. That is beyond the scope of this discussion; refer to Djongo's official documentation to achieve exactly this.
Continuing our Django app building process, we need to define our models. As we are building a blog-like application, our models would look like this:
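A minimal sketch of such a model (the app name and field names are illustrative):

```python
# blog/models.py
from djongo import models


class Entry(models.Model):
    headline = models.CharField(max_length=255)
    body_text = models.TextField()
    pub_date = models.DateTimeField(auto_now_add=True)

    def __str__(self):
        return self.headline
```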
We can run a local MongoDB instance and create migrations for these models. We also register these models in our Django Admin, like so:
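The admin registration could look like this (the app name `blog` is illustrative):

```python
# blog/admin.py
from django.contrib import admin

from .models import Entry

admin.site.register(Entry)
```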
We can play with the Entry model’s CRUD operations via Django Admin for this example.
Also, to verify the Django-MongoDB connectivity, we will create a custom view and template that display information about the MongoDB setup and the currently connected MongoDB host.
Our Django views look like this:
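A sketch of such a view; it queries the connected node with `isMaster` via PyMongo to report the replica set topology (module and context names are illustrative):

```python
# blog/views.py
from django.conf import settings
from django.shortcuts import render
from pymongo import MongoClient

from .models import Entry


def home(request):
    # Ask the connected MongoDB node about the replica set topology.
    db = settings.DATABASES['default']
    client = MongoClient(db['HOST'], int(db['PORT']))
    status = client.admin.command('isMaster')
    context = {
        'connected_host': status.get('me'),
        'primary': status.get('primary'),
        'hosts': status.get('hosts', []),
        'entries': Entry.objects.all(),
    }
    return render(request, 'home.html', context)
```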
Our URLs or routes configuration for the app looks like this:
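A sketch of the app-level URL configuration (names are illustrative):

```python
# blog/urls.py
from django.urls import path

from . import views

urlpatterns = [
    path('', views.home, name='home'),
]
```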
And for the project — the app URLs are included like so:
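A sketch of the project-level URL configuration (the project and app names are illustrative):

```python
# project-level urls.py
from django.contrib import admin
from django.urls import include, path

urlpatterns = [
    path('admin/', admin.site.urls),
    path('', include('blog.urls')),
]
```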
Our Django template, ‘templates/home.html’ looks like this:
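A sketch of the template, assuming the context variables from the view above (markup is illustrative):

```html
<!-- templates/home.html -->
<h1>Consul Demo Blog</h1>
<p>Connected MongoDB host: {{ connected_host }}</p>
<p>Current primary: {{ primary }}</p>
<ul>
  {% for host in hosts %}
    <li>{{ host }}</li>
  {% endfor %}
</ul>
```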
To run the app we need to migrate the database first using the command below:
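The standard Django migration command:

```shell
python manage.py migrate
```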
And also collect all the static assets into static directory:
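The standard static-collection command:

```shell
python manage.py collectstatic --noinput
```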
Now run the Django app with Gunicorn, a WSGI HTTP server, as given below:
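For example (the WSGI module name depends on your project name, so it is illustrative here):

```shell
gunicorn consul_demo.wsgi:application --bind 0.0.0.0:8000
```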
This gives us a basic blog-like Django app that connects to MongoDB backend.
We will discuss containerizing this Django web application in the latter part of this article.
As part of our Consul setup, we place a Consul agent alongside every service.
The Consul agent is responsible for service discovery: it registers the service with the Consul cluster and also monitors the health of every service instance.
Consul on nodes running MongoDB Replica Set
We will discuss the Consul setup in the context of the MongoDB Replica Set first, as it solves an interesting problem: at any given point in time, a MongoDB instance can be either the primary or a secondary.
The Consul agent registering and monitoring a MongoDB instance within the Replica Set uses a special mechanism: it dynamically registers and deregisters the MongoDB service as a primary or a secondary instance, based on the role the Replica Set has assigned to the node.
We achieve this dynamism by running a shell script at a regular interval that toggles the Consul service definition between MongoDB primary and MongoDB secondary on the node's Consul agent.
The service definitions for the MongoDB services are stored as JSON files in Consul's config directory '/etc/consul.d'.
Service definition for MongoDB Primary instance:
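A sketch of such a definition, assuming script checks are enabled on the agent (the service name, script path and interval are illustrative):

```json
{
  "service": {
    "name": "mongo-primary",
    "port": 27017,
    "check": {
      "args": ["/scripts/check_mongo_primary.sh"],
      "interval": "10s"
    }
  }
}
```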
If you look closely, the service definition gives us a DNS entry specific to the MongoDB primary, rather than a generic MongoDB instance. This allows us to send database writes to a specific MongoDB instance; in a Replica Set, writes are handled by the primary.
Thus, we are able to achieve both service discovery and health monitoring for the primary instance of MongoDB.
Similarly, with a slight change the service definition for MongoDB Secondary instance goes like this:
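The same sketch with the name and check script swapped (names and paths remain illustrative):

```json
{
  "service": {
    "name": "mongo-secondary",
    "port": 27017,
    "check": {
      "args": ["/scripts/check_mongo_secondary.sh"],
      "interval": "10s"
    }
  }
}
```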
Given all this context, can you think of a way to dynamically switch these service definitions?
We can identify whether a given MongoDB instance is the primary by running the `db.isMaster()` command in the MongoDB shell.
The check can be drafted as a shell script:
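A sketch of that check, using jq to pull the `ismaster` field out of the shell's JSON output:

```shell
#!/bin/sh
# Exit 0 only if this MongoDB node is currently the replica set primary.
ismaster=$(mongo --quiet --eval 'JSON.stringify(db.isMaster())' | jq -r '.ismaster')
[ "$ismaster" = "true" ]
```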
Similarly, the non-primary instances of MongoDB can be checked against the same command, by looking at the `secondary` value:
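The mirror-image sketch for secondaries:

```shell
#!/bin/sh
# Exit 0 only if this MongoDB node is currently a replica set secondary.
secondary=$(mongo --quiet --eval 'JSON.stringify(db.isMaster())' | jq -r '.secondary')
[ "$secondary" = "true" ]
```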
Note: We are using jq — a lightweight and flexible command-line JSON processor — to process the JSON encoded output of MongoDB shell commands.
One way of writing a script that does this dynamic switch looks like this:
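One sketch of such a toggle loop (the file locations and interval are illustrative):

```shell
#!/bin/sh
# Periodically swap the registered Consul service between
# mongo-primary and mongo-secondary based on this node's current role.
while true; do
  ismaster=$(mongo --quiet --eval 'JSON.stringify(db.isMaster())' | jq -r '.ismaster')
  if [ "$ismaster" = "true" ]; then
    cp /scripts/mongo_primary.json /etc/consul.d/mongo.json
  else
    cp /scripts/mongo_secondary.json /etc/consul.d/mongo.json
  fi
  # Ask the local agent to re-read its service definitions.
  consul reload
  sleep 10
done
```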
Note: This is an example script, but we can be more creative and optimize the script further.
Once we are done with our service definitions, we can run the Consul agent on each MongoDB node. To run an agent we will use the following command:
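Roughly like this (the data directory is illustrative):

```shell
consul agent -data-dir=/tmp/consul \
  -config-dir=/etc/consul.d \
  -retry-join=consul_server
```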
Here, ‘consul_server’ represents the Consul Server running host. Similarly, we can run such agents on each of the other MongoDB instance nodes.
Note: If we had multiple MongoDB instances running on the same host, the service definitions would change to reflect the different ports used by each instance, so that each MongoDB instance could be uniquely identified, discovered and monitored.
Consul on nodes running Django App
For the Django application, the Consul setup is very simple. We only need to monitor the port on which Gunicorn is listening for requests.
The Consul service definition would look like this:
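A sketch of that definition, using an HTTP check against Gunicorn's port (the service name and port are illustrative):

```json
{
  "service": {
    "name": "web",
    "port": 8000,
    "check": {
      "http": "http://localhost:8000/",
      "interval": "10s"
    }
  }
}
```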
Once we have the Consul service definition for the Django app in place, we can run the Consul agent on the node where the Django app is running. To run the Consul agent we would fire the following command:
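The same agent invocation sketch as on the MongoDB nodes:

```shell
consul agent -data-dir=/tmp/consul \
  -config-dir=/etc/consul.d \
  -retry-join=consul_server
```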
We run the Consul cluster with a dedicated Consul server node. The Consul server node could just as easily host, discover and monitor services running on it, exactly the same way as we did in the sections above for MongoDB and the Django app.
To run Consul in server mode and allow agents to connect to it, we will fire the following command on the node that we want to run our Consul server:
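A single-server sketch (for production you would run three or five servers and raise `-bootstrap-expect` accordingly):

```shell
consul agent -server -bootstrap-expect=1 \
  -data-dir=/tmp/consul \
  -client=0.0.0.0 -ui
```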
There are no services on our Consul server node for now, so there are no service definitions associated with this Consul agent configuration.
We are using Fabio because it is Consul-aware and configures itself automatically.
This makes our task of load-balancing the traffic to our Django app instances very easy.
To allow Fabio to auto-detect services via Consul, one way is to add or update a tag in the service definition with the prefix and service identifier `urlprefix-/<service>`. Our Consul service definition for the Django app would now look like this:
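The earlier sketch with the Fabio routing tag added (service name and port remain illustrative):

```json
{
  "service": {
    "name": "web",
    "tags": ["urlprefix-/web"],
    "port": 8000,
    "check": {
      "http": "http://localhost:8000/",
      "interval": "10s"
    }
  }
}
```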
In our case, the Django app is the only service that needs load balancing, so this change to the Consul service definition completes the Fabio setup.
Our whole app is going to be deployed as a set of Docker containers. Let’s talk about how we are achieving it in the context of Consul.
Dockerizing MongoDB Replica Set along with Consul Agent
We need to run a Consul agent, as described above, alongside MongoDB in the same Docker container, so we need a custom ENTRYPOINT on the container that allows running two processes.
Note: This can also be achieved using Docker-container-level checks in Consul. That way, you are free to run the Consul agent on the host and have it check the service running in the Docker container; the check essentially execs into the container to monitor the service.
To achieve this we will use a Procfile-based process manager similar to Foreman, which lets us declare several processes in a single Procfile and run them together in one container.
In our case, the Procfile looks like this:
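A sketch of such a Procfile (the process names and script paths are illustrative):

```
mongodb: mongod --bind_ip 0.0.0.0 --port 27017 --dbpath /data/db --replSet consuldemo
consul: consul agent -data-dir=/tmp/consul -config-dir=/etc/consul.d -retry-join=consul_server
consul_check: /scripts/consul_check.sh
```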
The `consul_check` entry at the end of the Procfile maintains the dynamism between the primary and secondary MongoDB checks, based on which role each node is elected to within the MongoDB Replica Set.
The shell scripts that are executed by the respective keys on the Procfile are as defined previously in this discussion.
Our Dockerfile, with some additional tools for debug and diagnostics, would look like:
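A sketch of such a Dockerfile; the MongoDB and Consul installation steps are elided for brevity, and the copied paths are illustrative:

```dockerfile
FROM ubuntu:18.04

# Base tooling plus debug/diagnostic utilities (curl, jq, dnsutils).
RUN apt-get update && apt-get install -y \
    curl unzip jq dnsutils ruby \
 && gem install foreman \
 && rm -rf /var/lib/apt/lists/*

# MongoDB and Consul installation steps are elided here for brevity.

COPY Procfile /app/Procfile
COPY scripts/ /scripts/
COPY consul.d/ /etc/consul.d/

WORKDIR /app
ENTRYPOINT ["foreman", "start"]
```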
Note: We have used a bare Ubuntu 18.04 image here for our purposes, but you can use the official MongoDB image and adapt it to run Consul alongside MongoDB, or even do Consul checks at the Docker container level as mentioned in the official documentation.
Dockerizing Django Web Application along with Consul Agent
We also need to run a Consul agent alongside our Django app in the same Docker container, as we did with the MongoDB containers.
Similarly, we will have a Dockerfile for the Django web application much like the one for our MongoDB containers.
Dockerizing Consul Server
We maintain the same flow for the Consul server node and run it with a custom ENTRYPOINT. This is not a requirement, but it keeps a consistent view across the different Consul run files.
Also, we are using the Ubuntu 18.04 image for the demonstration. You could very well use Consul's official image for this, which accepts all the custom parameters mentioned in its documentation.
We are using Docker Compose to run all our containers in a desired, repeatable form.
Our Compose file captures all the aspects we mentioned above and uses Docker Compose to achieve them in a seamless fashion.
Docker Compose file would look like the one given below:
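A sketch of such a Compose file; the service names, build contexts and the Fabio environment variable are illustrative and would need to match your own layout:

```yaml
version: '3'

services:
  consul_server:
    build: ./consul_server
    ports:
      - "8500:8500"   # Consul Web UI / HTTP API

  mongo_1:
    build: ./mongo
    depends_on: [consul_server]
  mongo_2:
    build: ./mongo
    depends_on: [consul_server]
  mongo_3:
    build: ./mongo
    depends_on: [consul_server]

  web_1:
    build: ./web
    depends_on: [mongo_1, mongo_2, mongo_3]
  web_2:
    build: ./web
    depends_on: [mongo_1, mongo_2, mongo_3]

  fabio:
    image: fabiolb/fabio
    environment:
      # Points Fabio's registry at the Consul server (variable name per Fabio docs).
      - FABIO_registry_consul_addr=consul_server:8500
    ports:
      - "9999:9999"   # load-balanced traffic
      - "9998:9998"   # Fabio UI
```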
That brings us to the end of the whole environment setup. We can now run Docker Compose to build and run the containers.
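The standard invocation:

```shell
docker-compose up --build -d
```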
Service Discovery using Consul
When all the services are up and running, the Consul Web UI gives us a nice overview of our overall setup.
The MongoDB service is available for the Django app to discover via Consul's DNS interface.
The Django app can now connect to the MongoDB primary instance and start writing data to it.
We can use the Fabio load balancer to connect to a Django app instance, auto-discovered via the Consul registry using the specialized service tags, and render the page with all the database connection information we discussed.
Our load balancer is listening on port 9999, and '/web' is configured to be routed to one of our Django application instances running behind the load balancer.
As you can see from Fabio's auto-detection and configuration in its UI, it has weighted the Django web application endpoints equally. This helps balance the request load across the Django application instances.
When we visit the Fabio URL on port 9999 and use the source route '/web', we are routed to one of the Django instances, which gives us the following output.
We restrict Fabio to load-balancing only the Django app instances by adding the required tags only to the Consul service definitions of the Django app services.
This discovery of the MongoDB primary instance helps the Django app with database migration and app deployment.
One can explore Consul Web UI to see all the instances of Django web application services.
Similarly, see how MongoDB Replica Set instances are laid out.
Let’s see how Consul helps with health-checking services and discovering only the alive services.
We will stop the container of the current MongoDB Replica Set primary ('mongo_2') to see what happens.
Consul starts failing the health check for the previous MongoDB primary service. The MongoDB Replica Set also detects that the node is down and re-elects a primary, automatically giving us a new MongoDB primary ('mongo_3').
Our check toggle has kicked in and swapped the check on 'mongo_3' from the MongoDB secondary check to the MongoDB primary check.
When we take a look at the view from the Django app, we see it is now connected to the new MongoDB primary service ('mongo_3').
Let’s see how this plays out when we bring back the stopped MongoDB instance.
Similarly, if we stop one of the service instances of the Django application, Fabio detects only the healthy instance and routes traffic to that instance alone.
This is how one can use Consul’s service discovery capability to discover, monitor and health-check services.
Service Configuration using Consul
Currently, we configure the Django application instances either from environment variables set within the containers by Docker Compose and consumed in the Django project settings, or by hard-coding the configuration parameters directly.
We can use Consul's key/value store to share configuration across both instances of the Django app.
We can use Consul's HTTP interface to store key/value pairs and retrieve them within the app using python-consul, an open-source Python client for Consul. You may also use any other Python library that can interact with Consul's KV store.
Let’s begin by looking at how we can set a key/value pair in Consul using its HTTP interface.
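A sketch using the KV HTTP API; the key name and value are illustrative:

```shell
curl -X PUT -d 'blogs' http://consul_server:8500/v1/kv/django/database_name
```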
Once the values are in the KV store, we can consume them on the Django app instances to configure the app.
Let’s install python-consul and add it as a project dependency.
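For example:

```shell
pip install python-consul
echo "python-consul" >> requirements.txt
```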
We will need to connect our app to Consul using python-consul.
We can then capture these values and configure our Django app accordingly using the python-consul library.
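A sketch of reading the key set above from Django settings (the agent host name and key are illustrative):

```python
import consul

# The host name is illustrative; point this at any Consul agent.
c = consul.Consul(host='consul_server', port=8500)

# kv.get returns an (index, data) tuple; the value arrives as bytes.
index, data = c.kv.get('django/database_name')
DATABASE_NAME = data['Value'].decode('utf-8')
```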
These key/value pairs in Consul's KV store can also be viewed and updated from its Web UI.
The code used as part of this guide for Consul’s service configuration section is available on ‘service-configuration’ branch of pranavcode/consul-demo project.
That is how one can use Consul’s KV store and configure individual services in their architecture with ease.
Service Segmentation using Consul
Consul Connect provides service-to-service connection authorization and encryption using mutual TLS.
To use Connect, you need to enable it in the server configuration. Connect needs to be enabled across the Consul cluster for the cluster to function properly.
In our context, to make the communication TLS-identified and secured, we will define an upstream sidecar service with a proxy on the Django app for its communication with the MongoDB primary instance.
Along with the Connect sidecar proxy configuration, we also need to run the Connect proxy for the Django app itself. This can be achieved by running the following command.
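Assuming the Django service is registered with a sidecar proxy under the name `web`, the command would look like this:

```shell
consul connect proxy -sidecar-for web
```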
We can add Consul Connect Intentions to create a service graph across all the services and define traffic patterns. We can create intentions as shown below:
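For example, allowing the Django service to reach the MongoDB primary (the service names follow the illustrative definitions used earlier):

```shell
consul intention create -allow web mongo-primary
```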
Intentions for service graph can also be managed from Consul Web UI.
This defines the connection restrictions between services, allowing or denying them to talk via Connect.
We have also given the Consul agents the ability to denote which datacenter they belong to and to be accessible via one or more Consul servers in a given datacenter.
The code used as part of this guide for Consul’s service segmentation section is available on ‘service-segmentation’ branch of velotiotech/consul-demo project.
That is how one can use Consul’s service segmentation feature and configure service level connection access control.
Having the ability to seamlessly control the service mesh that Consul provides makes an operator's life very easy. We hope you have learned how Consul can be used for service discovery, configuration, and segmentation through this practical implementation.
As usual, we hope it was an informative ride on the journey of Consul. This was the final piece of this two-part series, covering most aspects of applying Consul to a real project. In case you missed the first part, find it here.
We will continue our endeavors with different technologies and bring you the most valuable information we can. Let us know what you would like to hear more about, or if you have any questions around the topic; we will be more than happy to answer them.
- Consul Demo that complements this guide
- HashiCorp Consul and its repo on GitHub
- HashiCorp Consul Guides and Code
This post was originally published on Velotio Blog.
Velotio Technologies is an outsourced software product development partner for technology startups and enterprises. We specialize in enterprise B2B and SaaS product development with a focus on artificial intelligence and machine learning, DevOps, and test engineering.