Dockerizing Microservices - FSOSS 2015 Presentation

This year at the Free Software and Open Source Symposium I had the great pleasure of giving a talk with Raffi about our experiences using Docker and microservice patterns at Cloud Dynamics.

For me it marked a milestone: six years ago, when I moved to Toronto, I attended FSOSS as a volunteer, and at that first event I watched a talk by Armen Zambrano, at the time a fresh graduate from the Software Development (BSD) program working at Mozilla's office in California. That inspired me to one day graduate from the same program and present at FSOSS myself.

Overall the experience was really positive: I saw some old friends from my college days, spoke with a few professors, and chatted with other presenters at the speaker's dinner hosted by Seneca.

Here are the links to the presentation slides and the video recording.

Managing Docker on Ubuntu 15.04 Vivid

There were some big changes introduced in Ubuntu 15.04. Systemd is now the default init system, replacing Upstart from previous Ubuntu releases.
There is a good comparison in the Ubuntu wiki.

So how does that affect Docker?

  1. Configuring Docker
  2. Accessing Logs
  3. Managing the service

Configuring Docker

With Upstart, the file /etc/default/docker had to be modified to configure settings for the Docker daemon. Now with Systemd the file /lib/systemd/system/docker.service needs to be modified instead.

Example
/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine  
Documentation=https://docs.docker.com  
After=network.target docker.socket  
Requires=docker.socket

[Service]
Type=notify  
ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock --bip=172.17.0.1/16 --dns=172.17.0.1  
MountFlags=slave  
LimitNOFILE=1048576  
LimitNPROC=1048576  
LimitCORE=infinity

[Install]
WantedBy=multi-user.target  

To enable the remote API, modify the ExecStart attribute in the [Service] section to ExecStart=/usr/bin/docker daemon -H fd:// -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock

As a note, you might also find the file /etc/systemd/system/multi-user.target.wants/docker.service; however, it gets overridden by the settings in /lib/systemd/system/docker.service.

Note: Make sure you reload the service daemon so the changes can take effect: systemctl daemon-reload
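
Putting it together, applying a configuration change looks like this:

sudo systemctl daemon-reload
sudo systemctl restart docker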

Accessing Logs

With Upstart, all the Docker daemon logs were stored at /var/log/upstart/docker.log.
Now with Systemd you need to use journalctl:

  • To dump all the logs: journalctl -u docker.service
  • To follow the logs: journalctl -u docker.service -f

Managing the service

With Upstart, the service tool was used to get the status of, start, stop, and restart the Docker service, while the update-rc.d tool managed whether Docker started on boot.

Now with Systemd we need to use the systemctl tool instead:

  • Get status of the Docker service: systemctl status docker
  • Enable Docker service to start on boot: systemctl enable docker
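
The remaining day-to-day operations map over the same way, for example:

systemctl start docker
systemctl stop docker
systemctl restart docker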

Forking DNSDock

When evaluating different tools for enabling Docker containers to easily communicate with each other as well as with external services, the tool that offered the path of least resistance with a complete feature set was DNSDock, a DNS server for automatic container service discovery.

In a nutshell, DNSDock relies on miekg's nimble DNS server implementation in Go. It hooks into the Docker socket, listens for events, and dynamically manages DNS entries for all the containers running on a given host.
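
As a rough sketch of how that looks in practice (the image name, the .docker domain, and the 172.17.42.1 default bridge IP follow the upstream README; adjust them for your setup and for the fork):

# run dnsdock with access to the Docker socket so it can watch container events
docker run -d --name dnsdock \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 172.17.42.1:53:53/udp \
  tonistiigi/dnsdock

# containers then become resolvable under the .docker domain, e.g. by image name
# (a hypothetical container started from the redis image):
dig @172.17.42.1 +short redis.docker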

From the get-go we faced a few issues with some tools not being able to resolve names when DNSDock was set as the primary DNS server. The details of the issues are documented here

A couple of pull requests were submitted by community members to solve the issues. However, after waiting more than a month without a response, we decided to fork the project and keep track of the PRs and updates ourselves.

The project fork can be found at bitbucket.org/clouddynamics/dnsdock
PRs #45 and #47 have been applied, and some project dependencies have been updated, which also fixed some other name-resolution issues in certain environments.

A patch for the issue Manage Multiples Nameservers #29 should be coming out very soon as well.

A Docker image is also being kept up to date at our Docker Hub repo

Deploying Nexus Maven Repository on AWS with Docker

Creating docker host

Make sure you create a new key pair or select an existing one before creating the instance.
Also make sure that the security group has all the required rules. If you know you will be accessing the host from a static IP or a single subnet, set the source to that IP or IP range; if you are not sure, just select Anywhere

Instance settings:

  • Instance type: t2.medium
  • EBS volume: 80 GB SSD
  • Image: CoreOS: CoreOS-stable-723.3.0-hvm
  • VPC: default
  • Subnet: default
  • Security Group: default
  • Security Groups Rules:
    • SSH - 22
    • TCP - 8081
    • ICMP - ALL
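
If you prefer the AWS CLI over the console, a roughly equivalent launch would look like this (the AMI and security group IDs below are placeholders; look up the CoreOS-stable-723.3.0-hvm AMI for your region):

aws ec2 run-instances \
  --image-id ami-xxxxxxxx \
  --instance-type t2.medium \
  --key-name aws-dev \
  --security-group-ids sg-xxxxxxxx \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"VolumeSize":80,"VolumeType":"gp2"}}]'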

Docker host access

To enable remote access, first associate an Elastic IP with the instance.
You can also create a custom domain with the Route 53 DNS service, but to begin with you can simply add an entry to your /etc/hosts file: 52.1.13.202 docker01.aws.cloud

With CoreOS, SSH password authentication is disabled, so make sure to select a key pair when creating the instance.
The root user is disabled for SSH access; use the core user instead.
To make access more convenient you can configure the identity file with a given host.

  1. First copy the key file to ~/.ssh
  2. Edit ~/.ssh/config and add the following:

Host docker01.aws.cloud
    IdentityFile ~/.ssh/aws-dev.pem

Now you should be able to access your Docker host by typing ssh core@docker01.aws.cloud

Another option is to simply reference the key identity file when accessing the host ssh -i aws-dev.pem core@docker01.aws.cloud
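
One detail worth noting: ssh refuses private keys that are readable by other users, so give the key file restrictive permissions after copying it:

chmod 600 ~/.ssh/aws-dev.pem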

Running nexus container

We decided to use the official sonatype/nexus image.
The recommended way to run nexus is to keep its data in a data volume, which gives the flexibility of updating nexus without losing all its data and configuration.

  1. Create the data volume: docker run -d --name nexus-data sonatype/nexus echo "data-only container for Nexus"
  2. Run the nexus container: docker run -d -p 8081:8081 --restart=always --name nexus --volumes-from nexus-data sonatype/nexus

You should be able to access your nexus repo at http://docker01.aws.cloud:8081/
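
Since all state lives in the nexus-data volume container, upgrading is just a matter of replacing the application container; a minimal sketch, assuming the names used above:

docker pull sonatype/nexus
docker stop nexus && docker rm nexus
docker run -d -p 8081:8081 --restart=always --name nexus --volumes-from nexus-data sonatype/nexus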

Uploading private artifacts

  1. In the Views/Repositories menu, select Repositories
  2. Click on the 3rd party repository
  3. Go to the Artifact Upload tab

Upload artifact

Browse Index
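
As an alternative to the UI, the upload can also be scripted with Maven's deploy plugin; a sketch, assuming the default Nexus 2.x URL for the 3rd party repository, the cdi server credentials configured in the maven section below, and a hypothetical my-lib artifact:

mvn deploy:deploy-file \
  -DrepositoryId=cdi \
  -Durl=http://docker01.aws.cloud:8081/content/repositories/thirdparty \
  -DgroupId=com.example \
  -DartifactId=my-lib \
  -Dversion=1.0.0 \
  -Dpackaging=jar \
  -Dfile=my-lib-1.0.0.jar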

Hardening security

By default nexus is pre-configured with three users:

  • admin - admin123
  • deployment - deployment123
  • anonymous

Make sure you change the admin user password and delete both the deployment and anonymous users.

In some cases anonymous access might be required; in that situation you can re-create the anonymous user and re-enable anonymous access with the correct set of roles and privileges. If you don't have a use case for anonymous access, simply leave it disabled.

To disable anonymous access:

  1. Under the Administration menu, select Server
  2. In the Security Settings group, disable the checkbox for Anonymous Access

After anonymous access has been disabled you must configure a new Privilege, Role and User to access your private group.

Creating privilege

Nexus has three types of privileges:

  • Application privileges - covers actions a user can execute in Nexus,
  • Repository target privileges - governs the level of access a user has to a particular repository or repository target, and
  • Repository view privileges - controls whether a user can view a repository

In our case we are creating a privilege for the Cloud Dynamics group, which holds our 3rd party plus public artifacts.

Create privilege

Creating role

A Nexus role is comprised of other Nexus roles and individual Nexus privileges.

Our custom role gives access to the previously created privilege.

Create role

Creating user

I suggest you create a user for each developer that needs access to nexus, whether from an administrative perspective or simply to download the artifacts required for a given project.

Create User

Configuring maven

Now with the nexus repo installed and configured, the last step is to update the Maven settings to use the nexus repo instead of the default central one.

Update the $M2_HOME/conf/settings.xml file

<?xml version="1.0" encoding="UTF-8"?>  
<settings>  
  <mirrors>
    <mirror>
      <id>cdi</id>
      <mirrorOf>*</mirrorOf>
      <url>http://docker01.aws.cloud:8081/content/groups/cdi</url>
    </mirror>
  </mirrors>
  <servers>
    <server>
      <id>cdi</id>
      <username>diogogmt</username>
      <password>your-password</password>
    </server>
  </servers>
</settings>  

The server section specifies the credentials for your nexus user, while the mirror section sets the URL Maven will use when downloading the POM dependencies for your projects.
If you don't want to change your global Maven settings, you can put the server and mirror configuration in your per-user ~/.m2/settings.xml instead, or declare the repository directly in the pom.xml for a given project.
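
To verify the mirror is actually being used, force an update and watch where Maven downloads from; the URLs should point at the nexus host (hypothetical output):

mvn -U clean install
# Downloading: http://docker01.aws.cloud:8081/content/groups/cdi/...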

Accessing docker containers from your mac

The most prevalent solution for running Docker on the Mac is using boot2docker or maybe a CoreOS Vagrant VM. In either case the docker0 network is not accessible from the Mac, since it is created as a host-only network on VirtualBox, usually with the CIDR 172.17.0.0/16.

A simple way to get connectivity to the host-only network created for Docker is to add a route for the Docker network's CIDR with the VM's IP as the gateway:

  1. Find the IP of your VM; if you are using docker-machine: $: docker-machine ls
  2. Find the CIDR of your Docker private network
    • $: docker-machine ssh yourVmName
    • $: ifconfig docker0 | grep "inet addr"; if the netmask is 255.255.0.0 it is a /16 network, so if the gateway is 172.17.42.1 its CIDR would be 172.17.0.0/16
  3. Create a route for the docker0 network: $: sudo route -n add 172.17.0.0/16 192.168.99.100

To confirm that the route was added successfully, you can check the routing table entries with netstat; you should see something like this:

netstat -nr  
172.17             192.168.99.100     UGSc            1        4 vboxnet
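
Keep in mind the route is not persistent: it is gone after a reboot, and it needs to be updated if the VM's IP changes. To remove it manually:

sudo route -n delete 172.17.0.0/16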