Using NATS instead of HTTP for inter-service communication

In a microservice architecture the traditional monolithic application is broken down into multiple small, scoped domains. That brings several benefits: from a development perspective, microservices can be iterated on individually, and from a performance perspective, they can be scaled separately.

With the move to a microservice architecture, one common problem is how to make all the services communicate with one another.
Typically an application exposes a REST API to its clients, so naturally the first option is to use HTTP internally as well.
The question then becomes: is HTTP the best messaging protocol for inter-service communication?

Looking at HTTP's definition in the official RFC:

The Hypertext Transfer Protocol (HTTP) is an application-level  
protocol for distributed, collaborative, hypermedia information  
systems. It is a generic, stateless, protocol which can be used for  
many tasks beyond its use for hypertext, such as name servers and  
distributed object management systems, through extension of its  
request methods, error codes and headers.  

It is clear that HTTP wasn't built to provide a communication layer between microservices.

By contrast, looking at the NATS project's description:

Open Source. Performant. Simple. Scalable.  
A central nervous system for modern, reliable,  
and scalable cloud and distributed systems.  

It becomes clear that it was built with modern distributed systems in mind.

One thing to keep in mind is that a RESTful API could still be built on top of NATS across all microservices. The key point is that HTTP is not the best protocol for the communication between them.

Benchmark

In theory it all sounds good, but in practice how much faster is NATS?
To test both approaches, a simple benchmark was put together that tries to mimic a real-world scenario.

The idea is that an HTTP proxy server accepts client connections and routes each request to the appropriate internal microservice. In one scenario HTTP was used to proxy the messages, and in the other, NATS.

For the HTTP proxy a Node.js Express server was used, and for the microservice a Go app using httprouter and the official NATS Go client.
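
To give a feel for the NATS variant, here is a minimal sketch of the microservice side (the subject name, the reply payload, and the nats.go import path are illustrative assumptions, not the benchmark's actual code):

package main

import (
	"log"

	"github.com/nats-io/nats.go"
)

func main() {
	// Connect to the local NATS server.
	nc, err := nats.Connect(nats.DefaultURL)
	if err != nil {
		log.Fatal(err)
	}
	defer nc.Close()

	// Reply to requests the proxy publishes on the "users.get" subject,
	// mirroring what an HTTP handler would return as a response body.
	if _, err := nc.Subscribe("users.get", func(m *nats.Msg) {
		nc.Publish(m.Reply, []byte(`{"id": 1, "name": "john"}`))
	}); err != nil {
		log.Fatal(err)
	}

	// Block forever; the NATS client handles messages on its own goroutines.
	select {}
}

On the proxy side, the equivalent of an outbound HTTP call is a single nc.Request("users.get", body, timeout), which publishes the message and waits for the reply.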

To generate the client traffic hitting the proxy, the bench-rest tool was used.
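
An invocation along these lines drives the load (assuming bench-rest's CLI with -n for the total number of requests and -c for concurrency; the URL is illustrative):

benchrest -n 50000 -c 2500 http://localhost:3000/users

For the sequential runs below the concurrency is simply 1.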

Sequential requests with no payload

Overall NATS performed ~3x faster than HTTP.
Processing 50,000 requests with HTTP as the messaging protocol between the proxy and the microservice took ~12 minutes, while with NATS it took only ~5.


Sequential requests with 1K payload

When adding some data to the requests the behavior was similar: NATS still processed messages ~3x faster than HTTP.

Parallel requests

It was when concurrency was added to the tests that NATS performed best.
To process 50,000 requests in batches of 2,500 concurrent requests at a time, NATS took 1.6 minutes while HTTP took 18, roughly an 11x difference.

Overall, the results show that NATS beats HTTP in both speed and throughput.

The source code used for the benchmarks can be found here.

Vagrant NFS Retryable exception

Last week I hit an odd issue with Vagrant. My environment was working fine when, suddenly, while resuming my box, I started to get the following exception printed in the console:

INFO retryable: Retryable exception raised: #<Vagrant::Errors::LinuxNFSMountFailed: The following SSH command responded with a non-zero exit status.  
    Vagrant assumes that this means the command failed!

After checking VirtualBox I could see my virtual machine was running and I was able to open a shell connection to it; however, the NFS shares were not mounted.

That led me down the path of trying to figure out what could be causing the problem. In the end I didn't find a clear answer; the closest I got was this response from Hashimoto:

I understand that this is an intermittent issue for people. I've even seen it myself. I'm still not sure what exactly causes this or how to get a solid reproduction. Or, how to fix it. Because of that, I'm going to close it since it is a rare issue. If someone can shed more light, I'd be happy to fix. Thanks.

What I did manage to do was gather a few steps that helped some people solve their Vagrant NFS problems; they are listed below.

Before trying any of the options below, make sure to export the VAGRANT_LOG=DEBUG environment variable to get more details in the console:

export VAGRANT_LOG=DEBUG  

1 - Configure the firewall

On the Mac, under System Preferences > Security & Privacy > Firewall,
make sure Automatically allow signed software to receive incoming connections is checked.

A more detailed firewall rule list can be found on this GitHub issue.

2 - Install the vagrant-vbguest plugin

To install the plugin:

vagrant plugin install vagrant-vbguest  

The plugin's GitHub page

3 - Restart vboxnet0 interface

Bring the interface down and up

$ sudo ifconfig vboxnet0 down
$ sudo ifconfig vboxnet0 up

4 - Restart NFSD daemon

First remove all the entries for your Vagrant box from the /etc/exports file and then restart the nfsd daemon:

sudo nfsd restart  
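
The entries Vagrant manages in /etc/exports are wrapped in marker comments, which makes them easy to spot; they look roughly like this (the path, IP, and IDs are illustrative):

# VAGRANT-BEGIN: <machine id>
"/Users/me/projects/myapp" 172.28.128.3 -alldirs -mapall=501:20
# VAGRANT-END: <machine id>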

5 - Install NFS dependencies on the client

In the case of an Ubuntu virtual machine, make sure the nfs-common and nfs-kernel-server packages are installed:

apt-get install nfs-common nfs-kernel-server  

Bash script notes

A couple of points I wanted to track regarding a bash script I had to write recently:

1 - Perform an action only if a command succeeds

Say you want to delete a Docker container only if it exists. You can first try to inspect the container, and if the command succeeds you can then delete it.

if docker inspect "$CONTAINER_NAME" &> /dev/null; then
  # The container exists, so it is safe to remove it.
  echo "Destroying $CONTAINER_NAME container"
  docker rm -f "$CONTAINER_NAME"
fi

2 - Use absolute paths when referring to dependent scripts

A common problem with scripts that depend on multiple files is that paths end up hardcoded in the script, which can break depending on the directory the user runs the script from. To avoid any issues in that regard, you can derive the script's own directory from the bash variable BASH_SOURCE:

script_dir=$(dirname "${BASH_SOURCE[0]}")  

So if you need to execute a script relative to the one currently running, you can refer to:

$script_dir/my-other-script.sh

Instead of just

./my-other-script.sh
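
Putting the two together, a minimal sketch (my-other-script.sh is a hypothetical sibling script living next to this one):

#!/usr/bin/env bash
# Resolve the directory this script lives in, regardless of
# the caller's working directory.
script_dir=$(dirname "${BASH_SOURCE[0]}")

# Invoke the sibling script by its absolute location.
"$script_dir/my-other-script.sh"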

Printing Docker Container IP Address

Just wanted to share a shortcut to print a Docker container's IP address.

Add the following to your ~/.bashrc or ~/.bash_profile:

docker-ip() {
  # Print the container's IP address on the default bridge network.
  docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$@"
}

After sourcing the file you can type docker-ip myContainerName in your shell and get back the IP address associated with that container:

$ docker-ip dnsdock
172.17.0.2  

Got the idea from a post on this Stack Overflow thread.

Creating and Deploying a private catalog on the Rancher Platform

With the announcement of private catalog support in the Rancher platform we decided to create a custom catalog for our application stack.

If you are looking for a complete overview and a step-by-step guide on how to create a private catalog, the best place is the January online meetup, Building a Docker Application Catalog.

In this post I'll guide you through the steps I took to create a catalog for our application, the Aytra Cloud.

The main concepts behind a catalog on Rancher are:

  • Catalog blueprints are defined with the docker-compose and rancher-compose frameworks
  • Templates are stored in a git repo
  • Stacks can be configured and deployed from those templates
  • Versioning and upgrades are supported

Creating the Catalog file structure

So, in a nutshell, you will need a git repo with the following directory structure.
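
The sketch below reflects the layout the Rancher catalog service expects: one folder per catalog entry under templates, with numbered subfolders per version (the aytra entry name is ours; the repo root name is illustrative):

catalogs/
└── templates/
    └── aytra/
        ├── config.yml
        ├── 0/
        │   ├── docker-compose.yml
        │   └── rancher-compose.yml
        └── 1/
            ├── docker-compose.yml
            └── rancher-compose.yml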

In the config.yml file you can define the basic configuration for your catalog entry, e.g.:

name: Aytra  
description: |  
  Aytra Cloud enables our customers to fully realize the potential of cloud computing by allowing them to automatically migrate application workloads across public and private clouds to achieve an ideal balance of security, cost, and performance.
version: 7.5  
category: "Cloud Management Platform"  

To support different versions of the same catalog entry, Rancher uses the concept of numbered directories, each containing one version of the docker-compose and rancher-compose templates.

One point to note about the rancher-compose file is that a .catalog section must be defined, along with a questions attribute; for example:

.catalog:
  name: Aytra
  version: 7.5
  description: |
    Aytra Cloud enables our customers to fully realize the potential of cloud computing by allowing them to automatically migrate application workloads across public and private clouds to achieve an ideal balance of security, cost, and performance.
  uuid: aytra-0
  questions:
    - variable: "exo-scale"
      description: "Number of exosphere nodes."
      label: "Number of Exosphere Nodes:"
      required: true
      default: 1
      type: "int"

Adding the Catalog to Rancher

Now, with the catalog file structure defined, you can add your private git repo to Rancher under the Admin -> Settings tab.

A couple of tweaks you need to make to enable private git repos:

1. Add the username and password of your git user in the git URL: https://username:password@bitbucket.org/org/catalogs.git

2. Bind-mount the SSH keys for your git repo into the rancher server container. This approach will only work after v0.56 is released, since currently the server container doesn't come with the openssh binaries installed.

Ideally you would be able to configure git users similarly to how the Jenkins git plugins do it, but that is not supported at this time.

Viewing the Catalog

The Rancher catalog service periodically polls the git repo for new changes to keep the catalogs up to date. So, right after adding the git repo to Rancher, you should see your catalog listed under the Applications -> Catalogs menu.

Launching the Catalog

Now you should be able to launch your application catalog and get a running stack with your microservices.