Docker Build Tips

With Docker’s rising popularity, many people are building and publishing their own images. It is easy to get started, and building an image can feel like going back to the days before configuration management arrived, with lots of messy bash scripting. Unsurprisingly, Docker Hub has its fair share of poorly written Dockerfiles.

This article offers some tips for structuring Dockerfiles better, while keeping the resulting images as small as possible.


It is important to understand that Docker images are made of layers: every command in the Dockerfile produces one. The rule of thumb is to create as few layers as possible, and to separate the ones that rarely change from those that change frequently.

Structuring the Dockerfile

If your Docker image typically installs a package, adds a couple of files and then runs the app, it is good to adhere to what I call the “FMERAEEC” method; that is, using the FROM, MAINTAINER, ENV, RUN, ADD, ENTRYPOINT, EXPOSE and CMD commands, in that order. Of course, not everything will fit into this.


FROM

Use your preferred base image; which one you use is entirely up to you. It is preferable not to use the latest tag, as you need to know when your base image changes in order to verify that the app still runs OK.
debian:jessie tends to be a popular base image on the Docker Hub. We’ll discuss slim images later on.


MAINTAINER

Use the RFC compliant “Name <email>” format, e.g.

MAINTAINER Tom Murphy <>


ENV

If you are installing packages, it is a good idea to use an ENV variable to specify which version of the main package is being installed, e.g.
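A minimal sketch, assuming the image installs nginx (the variable name and version value are illustrative):

```dockerfile
# pin the version of the main package in one place
ENV NGINX_VERSION 1.8.0-1~jessie
```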


RUN

This layer will change frequently, so the golden rule is to chain all the bash commands into a single RUN wherever possible.
Typically you will update the list of packages, install the package(s) (using the version specified in ENV), then clean up the package lists.
You may then run some configuration commands such as sed, or create symbolic links from the log files to the standard output/error etc. e.g.
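A sketch of such a chained RUN for nginx, assuming an ENV variable NGINX_VERSION holds the pinned version (names are illustrative):

```dockerfile
# install a pinned nginx, clean up the apt lists, then redirect the logs
RUN apt-get update \
 && apt-get install -y nginx=${NGINX_VERSION} \
 && rm -rf /var/lib/apt/lists/* \
 && ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log
```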

Building from source

If you frequently build a container that requires something built from source, it is preferable not to build from source inside the Dockerfile. Those builds take longer, and with the build dependencies required you may end up with a large layer. Instead, have a separate process which builds, packages and stores the artifacts somewhere; your main app build can then fetch and install the packages.


ADD

Add your configuration files, artifacts built by your CI/CD etc., plus your entrypoint script.
Prefer COPY for plain files and directories; use ADD only when you need its extra features, such as fetching remote URLs or automatically extracting local tar archives.

A common misconception is that you need to create the destination directories yourself; in fact ADD and COPY automatically create any missing parent directories in the destination path, so there is no need for RUN mkdir commands.
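For example (the file names are illustrative), the parent directories are created automatically:

```dockerfile
# /etc/myapp/conf.d/ is created automatically if it does not exist
COPY app.conf /etc/myapp/conf.d/app.conf
```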


ENTRYPOINT

When you run your Docker container, the entrypoint script runs first. This is a good place to make configuration changes to the container based on any environment variables you pass in at run time, then issue:
exec "$@"
which executes the CMD command.
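A minimal entrypoint sketch along those lines; the environment variable and config path are illustrative assumptions:

```shell
#!/bin/sh
set -e

# adjust the config from an environment variable passed at run time
if [ -n "$LISTEN_PORT" ]; then
    sed -i "s/^listen .*/listen ${LISTEN_PORT};/" /etc/nginx/conf.d/default.conf
fi

# hand over to the CMD
exec "$@"
```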

A classic step many forget is to add the execute bit to the script. Ensure it is set in your source control, rather than running an additional RUN command to chmod the file.


EXPOSE

Expose the container port(s) regardless of whether you will use Docker links or bind to the host.
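For example, for a web server:

```dockerfile
EXPOSE 80 443
```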


CMD

Finally, CMD specifies the command (and options, such as running in the foreground) to run in the container.

It is preferred to put the command and options in the form of:
CMD ["executable","param1","param2"]
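Continuing the nginx example, running in the foreground:

```dockerfile
CMD ["nginx", "-g", "daemon off;"]
```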

Notes on other commands

LABEL: many issue a LABEL command for the container description and another for the version of the image produced. Unless you have a good reason to use them, and your platform(s) will actually query the metadata, avoid them; especially the description label, which mostly remains static. The version of the image is what you tag the image with. Remember that each LABEL command produces another layer.


Logging

Many mount a volume into the container so that the host can access the logs and ship them somewhere. Whilst this approach works, it is a lot simpler for the container to send logs to the standard output and standard error, then use a Docker logging driver.

Running the container CMD in the foreground should produce at least the startup log on the standard output. To get the full logs, redirect them to the outputs by creating symbolic links in a RUN command.
For example with nginx again:
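A sketch, using the log paths of the Debian nginx package:

```dockerfile
RUN ln -sf /dev/stdout /var/log/nginx/access.log \
 && ln -sf /dev/stderr /var/log/nginx/error.log
```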

Ensure that the user has permission to write to the outputs.

Keeping images the smallest size possible

As mentioned earlier, try and keep the number of layers as low as possible. Some other tips are:

  • Remove the package lists at the end of the RUN command: rm -rf /var/lib/apt/lists/*
    And optionally any other temporary files under /var/tmp/* and /tmp/*
  • If you are downloading archives, remove them after extracting
  • If you are building from source, remove the packages required for building and their dependencies
  • Consider building using a slim base image

Do not flatten the image with a docker export unless you have a good reason to. An export does not preserve the layers, so you can no longer take advantage of cached layers.

Slim Images using Alpine Linux

Whenever possible it is good to use a slim image, to speed up builds and deployments.
The busybox image has been around for a while, but recently there has been an increasing trend towards adopting the Alpine Linux image.
It is a security-oriented, lightweight Linux distribution based on musl libc and BusyBox. It is only 4.8MB!
Several official images are published to Docker Hub with an Alpine slim variant; keep an eye out for tags suffixed with -alpine.

Alpine Linux currently has a limited number of packages in its package repository, but it still has some popular ones: nginx, squid, redis, openjdk7 etc. For comparison, the openjdk image based on Alpine is 100MB, which is over 5 times smaller than the Debian-based image (560MB).

Some may have security reservations about slim images versus a full O.S. image. In my opinion they are reasonably secure as long as: 1. the upstream image is frequently updated; 2. you always pull the latest base image on every build; and 3. more importantly, your host O.S. is regularly patched.

Final words

There are many, many topics to learn and cover in Docker. Ensure you are well familiar with the Docker build documentation.
Read the Dockerfile best practices.
Check how others build their images on Docker Hub, GitHub etc., learn from them and enhance yours. Try to keep your build structure and naming simple and consistent.

The next Docker topic will cover the Docker platform. Watch this space and happy Dockering!

Multiple IPs and ENIs on EC2 in a VPC

Back in 2012, Amazon Web Services launched support for multiple private IP addresses for EC2 instances within a VPC.

This is particularly useful if you host several SSL websites on a single EC2 instance, as each SSL certificate must be hosted on its own (private) IP address. You can then associate each private IP address with an Elastic IP address to make the SSL website accessible from the internet.

Multiple IPs and Limits

This AWS blog entry briefly describes how multiple IPs are managed.

When you create a VPC, you are by default limited to 5 Elastic IP addresses. However, it is easy to request an increase by completing this form.

Note that a single Elastic Network Interface (ENI) can have multiple secondary IP addresses. For example, on an m1.small instance type you can have up to 4 IPs, which in Linux would be the eth0, eth0:0, eth0:1 and eth0:2 interfaces.

There is also a limit on the number of ENIs and IPs for each instance type; see the documentation.

Asymmetric Routing

When you add a second ENI, the AWS documentation misses a fundamental note on how to configure the instance O.S. to handle the network routes.

If you attach the second ENI, associate it with an Elastic IP and bring it up in Linux (with ifup, after adding it to /etc/network/interfaces), your network will very likely be performing asymmetric routing. Try pinging the Elastic IP of eth1: you get no response. This is because the response packets leaving the instance are not sent out via the correct gateway.

Asymmetric routing is explained in depth in this article

Route configuration with additional ENIs

The fix is to add additional routes for the new ENIs. This guide assumes that so far you have followed the AWS documentation for adding a second ENI.

We’re assuming the instance has an interface eth0 with a private address in one subnet, and we want to add an ENI (eth1) with an IP address from a different subnet.

The /etc/network/interfaces file should look like this after adding eth1:
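A sketch of the file, using illustrative addresses (eth0 in 10.0.0.0/24, the new ENI in 10.0.1.0/24 with address 10.0.1.10):

```
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
    address 10.0.1.10
    netmask 255.255.255.0
```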

Then bring up the eth1 interface:
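```shell
sudo ifup eth1
```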

Let’s check the route:
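With the illustrative addresses used in this guide (eth0 on 10.0.0.10/24, eth1 on 10.0.1.10/24), the output would look something like:

```shell
$ ip route show
default via 10.0.0.1 dev eth0
10.0.0.0/24 dev eth0  proto kernel  scope link  src 10.0.0.10
10.0.1.0/24 dev eth1  proto kernel  scope link  src 10.0.1.10
```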

There is one default gateway (bound to the VPC internet gateway), which routes all traffic from eth0. However, any traffic from eth1 with a destination outside its own subnet will be dropped, so we need to configure routing to the default gateway for the second subnet.

Firstly, add an entry “2 eth1_rt” to the routing tables file:
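Appending to /etc/iproute2/rt_tables:

```shell
echo "2 eth1_rt" | sudo tee -a /etc/iproute2/rt_tables
```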

Next we need to add a default route to the gateway for eth1:
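Assuming the illustrative 10.0.1.0/24 subnet for eth1, whose VPC gateway is the first address in the subnet, 10.0.1.1:

```shell
sudo ip route add default via 10.0.1.1 dev eth1 table eth1_rt
```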

Verify that the route is added:
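```shell
ip route show table eth1_rt
```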

Finally we need to add a rule which tells the kernel to route traffic with a source of the eth1 address via the eth1_rt table:
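Assuming the illustrative eth1 address of 10.0.1.10:

```shell
sudo ip rule add from 10.0.1.10/32 table eth1_rt
```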

Verify that the rule is added:
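```shell
ip rule show
```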

Now, from your machine, try to ping the Elastic IP associated with eth1; it should work. Asymmetric routing has been fixed!

To make the route changes permanent, so that they survive a reboot, add them to the interfaces file:
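For example with post-up commands, using the same illustrative addresses as above:

```
auto eth1
iface eth1 inet static
    address 10.0.1.10
    netmask 255.255.255.0
    post-up ip route add default via 10.0.1.1 dev eth1 table eth1_rt
    post-up ip rule add from 10.0.1.10/32 table eth1_rt
```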

If you wish to associate with eth1 a private IP from the same subnet as the eth0 network, just replace the gateway and subnet values accordingly.

Systems Orchestration with MCollective


When you have hundreds or thousands of servers, you need to be able to make quick changes to them in one go, rather than ssh-ing into every server and executing repetitive commands, which is inefficient and time consuming.

Marionette Collective aka MCollective is a great tool for centralised server orchestration.

Now owned by Puppet Labs, it integrates well with Puppet, but also Chef.

What it can do

MCollective can remotely work with several system components:

  • puppet: manage Puppet agents (run a test, enable / disable, get statistics etc…)
  • package: install, uninstall a package
  • apt: upgrade packages, list number of available upgrades
  • service: start, stop, restart a service
  • nettest: check ping and telnet connectivity
  • filemgr: touch, delete files
  • process: list, kill process
  • nrpe: run nrpe commands (check_load, check_disks, check_swap)
  • …and more

How it works

The MCollective client (your desktop or a management server) sends tasks to a message queue, which all the MCollective agents on the servers listen to.
Tasks can be targeted at specific agents thanks to discovery filters, which can be:

  • facts: any fact returned by Facter such as country, OS name or version, domain, ip address, mac address etc…
  • identity: the server’s hostname or fqdn
  • classes: the Puppet classes applied to the server

Filters can be combined and regular expressions can be used as well.

MCollective presentations

Watch an Introduction to Systems Orchestration with MCollective from PuppetConf 2013

Slideshares by the architect of MCollective, R.I. Pienaar

Vagrant MCollective framework
The easiest way to quickly try MCollective is to use the Vagrant MCollective framework linked at the bottom (just run 2 commands and it builds a Vagrant cluster!).

Installing MCollective

We’ll be installing and configuring MCollective for Ubuntu 12.04 LTS.

Setup Apt Repositories

By default MCollective works with ActiveMQ; however, I’d recommend RabbitMQ over ActiveMQ.
To get the latest RabbitMQ packages, use the official RabbitMQ apt repository, as the Ubuntu one is quite old:
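At the time of writing, the repository could be added like this (repository line and signing key URL as published by RabbitMQ; verify against their current instructions):

```shell
echo "deb http://www.rabbitmq.com/debian/ testing main" | sudo tee /etc/apt/sources.list.d/rabbitmq.list
wget -O- https://www.rabbitmq.com/rabbitmq-signing-key-public.asc | sudo apt-key add -
```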

We also need the Puppet Labs apt repository to get the latest MCollective packages:
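For Ubuntu 12.04 (precise), the Puppet Labs release package sets up the repository (package name per the Puppet Labs instructions of the time):

```shell
wget https://apt.puppetlabs.com/puppetlabs-release-precise.deb
sudo dpkg -i puppetlabs-release-precise.deb
```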

Finally, update the package lists:
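```shell
sudo apt-get update
```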

RabbitMQ Configuration

The RabbitMQ connector uses the STOMP rubygem to connect to RabbitMQ servers.

Install rabbitmq-server:
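```shell
sudo apt-get install -y rabbitmq-server
```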

Enable Stomp and management plugins then restart RMQ:
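```shell
sudo rabbitmq-plugins enable rabbitmq_stomp
sudo rabbitmq-plugins enable rabbitmq_management
sudo service rabbitmq-server restart
```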

Download the rabbitmqadmin script, which we will use to configure some settings:
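The management plugin serves the script itself (port 15672 on RabbitMQ 3.x):

```shell
wget http://localhost:15672/cli/rabbitmqadmin
chmod +x rabbitmqadmin
```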

Create the RMQ user, vhost, permissions and exchanges:
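A sketch along the lines of the MCollective RabbitMQ connector documentation; the user name and password are illustrative:

```shell
./rabbitmqadmin declare user name=mcollective password=secret tags=
./rabbitmqadmin declare vhost name=/mcollective
./rabbitmqadmin declare permission vhost=/mcollective user=mcollective configure='.*' write='.*' read='.*'
./rabbitmqadmin declare exchange --vhost=/mcollective name=mcollective_broadcast type=topic
./rabbitmqadmin declare exchange --vhost=/mcollective name=mcollective_directed type=direct
```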

Add the stomp listener to the RabbitMQ config by editing /etc/rabbitmq/rabbitmq.config
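A minimal config enabling the STOMP listener on the default port:

```
[
  {rabbitmq_stomp, [{tcp_listeners, [61613]}]}
].
```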

Restart RabbitMQ

MCollective Agents Configuration

On any server you wish to orchestrate remotely via MCollective, you must install the mcollective and mcollective-agent-* packages. Let’s start with the package, service and puppet agents:
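Using the Puppet Labs package names of the time (verify against the repository):

```shell
sudo apt-get install -y mcollective mcollective-package-agent mcollective-service-agent mcollective-puppet-agent
```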

Edit the MCollective configuration on the agents at /etc/mcollective/server.cfg with the details of the RabbitMQ/Stomp server and authentication details previously set.
Remove the connector and plugin.stomp settings and replace with:
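A sketch of the RabbitMQ connector settings (the setting names follow the MCollective rabbitmq connector documentation; the host, user and password are illustrative and should match what you configured in RabbitMQ):

```
connector = rabbitmq
plugin.rabbitmq.vhost = /mcollective
plugin.rabbitmq.pool.size = 1
plugin.rabbitmq.pool.1.host = rabbitmq.example.com
plugin.rabbitmq.pool.1.port = 61613
plugin.rabbitmq.pool.1.user = mcollective
plugin.rabbitmq.pool.1.password = secret
```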

Restart MCollective

MCollective Client Configuration

On your desktop or management server, install the base MCollective and ruby-stomp packages:
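Assuming the same Puppet Labs package names:

```shell
sudo apt-get install -y mcollective-client ruby-stomp
```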

Plus the client packages to communicate with the package, service and puppet agents:
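```shell
sudo apt-get install -y mcollective-package-client mcollective-service-client mcollective-puppet-client
```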

Edit the MCollective client configuration at /etc/mcollective/client.cfg with the same settings as server.cfg configured on the agents:

Restart MCollective

Running MCollective

Use mco help to see the available commands, and for help on a specific command run mco help followed by the command name.

The easiest way to see which servers are discoverable is to run a ping:
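Each reachable server answers with its identity and response time:

```shell
mco ping
```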

Get the status of a package (you can also install/uninstall/update/purge):
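For example, for the puppet package (syntax as per the package application’s help):

```shell
mco package status puppet
```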

Get the status of the ssh service (you can also start/stop/restart):
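```shell
mco service ssh status
```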

Execute a Puppet agent run on all nodes with a concurrency of 4:
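The puppet application’s runall action takes the concurrency as its argument:

```shell
mco puppet runall 4
```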

Using Filters

Before using filters, you need to know the facts and classes on a server:
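The inventory application lists both for a given node (the host name is illustrative):

```shell
mco inventory server1.example.com
```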

Identity filter
To run mco with a server identity use:
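For example, pinging a single server by identity (the host name is illustrative):

```shell
mco ping -I server1.example.com
```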

Class filter
If you have a class apache deployed on the web servers, you can restart apache on just those servers using a class filter:
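Assuming the service agent is installed and the Ubuntu service is named apache2:

```shell
mco service apache2 restart -C apache
```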

Fact filter
To update the puppet package on all Ubuntu 12.04 servers using a fact filter:
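Using Facter’s lsbdistrelease fact:

```shell
mco package update puppet -F "lsbdistrelease=12.04"
```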


MCollective is a very useful tool which will save sysadmins a lot of time, helping them deploy applications and maintain servers much more quickly.

There are many plugins that can be added to MCollective.

Be sure to check out the official MCollective documentation.