Docker 1.12 was released recently. This post is an overview of the goodies it brings, as well as a breakdown of what each change means for you.
- Native swarm commands in engine
- Service Aware
- Self Healing
- Security (TLS by default)
- Load Balancing (Routing Mesh)
- Rolling Deploys
Native swarm commands in engine
There are new docker swarm commands built into the engine. These replace the need for the swarm containers that were previously used to create a swarm. Now setting up a swarm involves running a single command on each node. On the first node, run this:
$ docker swarm init --listen-addr <MANAGER-IP>:<PORT>
Here <MANAGER-IP> is the internal IP address of this node; the other nodes should be able to reach this node at that address. <PORT> is the port the swarm nodes will use to communicate with each other. The Docker docs use port 2377, but you can use any port, or omit it to use the default.
Running that command will print another command to run on the other nodes; it will look something like this:
$ docker swarm join --secret 4ao565v9jsuogtq5t8s379ulb \
    --ca-hash sha256:07ce22bd1a7619f2adc0d63bd110479a170e7c4e69df05b67a1aa2705c88ef09 \
    192.168.99.100:2377
After running that command on the other nodes we're done: we now have a working swarm!
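To check that the swarm is up, you can list its nodes from the manager. This is a quick sketch; the exact output columns depend on your Docker version:

```shell
# Run on the manager node: lists every node in the swarm.
# The node we ran `swarm init` on shows a MANAGER STATUS of "Leader".
$ docker node ls
```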
- To set up a swarm we needed to start swarm master and agent containers on each node
- No need for swarm containers; swarms are created with the native docker swarm commands
To find out more about how to set up a swarm in 1.12, check out the official docs.
Service Aware
Services are the key piece that enables many of the other improvements we'll see later. The service commands tell swarm the desired state of the service in the cluster, which allows it to intelligently orchestrate the service when nodes or containers die.
The service commands are similar in concept to docker-compose in that they define the state of the service. docker-compose had similar commands, but the real drawback was that they ran once to ensure the state was correct at that moment and didn't monitor things continuously. With the new docker service commands, docker-compose will likely take on a much more "development only" role.
In previous versions of Docker we had to find other ways to scale our services and to ensure that each service had the required number of replicas. Some used Kubernetes or Amazon EC2 for this; it was also possible to use docker-compose, although as we've mentioned it only ensured state at the moment it was run.
- We defined the services using other tools like Kubernetes or the configuration files for Amazon EC2 which took care of container orchestration.
- Using the docker service commands we can tell swarm the desired state of the service, which means that Docker can natively orchestrate the services.
For more info on how to deploy services, see here.
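As a minimal sketch of what this looks like in practice (the service name web and the nginx image are just examples):

```shell
# Declare a service with a desired state of 3 replicas,
# publishing port 80 across the swarm.
$ docker service create --name web --replicas 3 -p 80:80 nginx

# List services and their replica counts.
$ docker service ls

# Change the desired state: swarm starts two more containers
# somewhere in the cluster to reach 5 replicas.
$ docker service scale web=5
```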
Self Healing
Swarms are now completely self healing. Any container that goes down will be rescheduled by the docker engine itself, without any other orchestration needed. Previously we needed an external orchestration layer such as Kubernetes to provide this; now it's built into Docker.
Swarm has always been scalable in the sense that we could transparently add or remove a node and the swarm would continue functioning. But although the swarm survived and scaled, it didn't orchestrate the containers: if a swarm node died, the rest of the cluster wasn't smart enough to realize that it needed to start new containers on another node to replace those that went down with it. In Docker 1.12 the swarm recognizes that the service no longer has enough containers to meet its scale requirement and automatically starts new containers on the other nodes.
Although it's a huge leap forward, it's still not perfect: if you add new nodes to a swarm, there is no way to tell it to redistribute its current services across the new nodes too. Expect things like this to improve quickly, though.
- Although the swarm itself was self healing and transparently scalable, the containers on it were not. We had to manage this manually or use other tools like Kubernetes to make sure the required number of containers was running on the swarm at any given time.
- The swarm is now aware of the services which should be running on the cluster and will re-schedule when a container or node goes down.
For a great demonstration of how swarm survives node failures and redeploys killed services, check out this official video.
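One way to watch the rescheduling yourself, assuming a replicated service named web like the earlier examples (note that in 1.12 the subcommand was docker service tasks; later releases renamed it to docker service ps):

```shell
# See which node each replica of the service is running on.
$ docker service tasks web

# Simulate a failure by stopping the Docker engine on one worker node,
# then run the command again: swarm reschedules the lost replicas onto
# the remaining nodes to restore the desired replica count.
```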
Security
Docker now takes care of all encryption between nodes transparently. It has always been possible to encrypt communication between swarm nodes using TLS, but up to now it has been a bit of a pain point, because in order to set it up we also needed to run a certificate authority server. Now swarm has a built-in CA, which allows it to enable TLS encryption between nodes by default. Another pain point of setting up the encryption manually was certificate rotation; in Docker 1.12 that is also managed for us transparently by the docker engine.
So now not only is it easier to encrypt all communication between swarm nodes, it is enabled by default!
- Communication between swarm nodes could be encrypted using TLS, but it required extra setup; to do it right you needed to run your own CA or use a third-party provider.
- You needed to manage certificate rotation when certs expired, which caused lots of headaches.
- All inter-node communication is encrypted by default; swarm bundles a CA and manages certs for you, including cert rotation.
For more info on how Docker 1.12 takes care of encryption between nodes, have a look at this official blog post.
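There is nothing to configure to get TLS, but the rotation interval can be tuned if the default doesn't suit you. A hedged example (run on a manager node; the 48h value is illustrative):

```shell
# Shorten the node certificate lifetime so certs rotate more often.
$ docker swarm update --cert-expiry 48h
```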
Load Balancing (Routing Mesh)
Docker's routing mesh makes networking a whole lot easier. The key concept is that if you publish a port on a service, it is published globally across the entire cluster, so the service can be reached on that port via any node. Docker also internally load balances each request to one of the available containers. This means a request arriving at any node is transparently balanced across the replicated containers in your service, even if those containers are spread across different nodes, and even if the node that received the request isn't running any of them. This is really cool and takes a lot of work off our hands.
If you need SSL, or need to proxy requests to multiple services (rather than load balance across one service), then you'll still need to set up a proxy. The big difference is that the proxy won't have to load balance across the individual containers, just across the services, so its configuration will be simpler.
- To load balance between replicated containers we needed to set up a proxy: nginx, haproxy etc, usually coupled with consul using consul-template in order to configure the IPs of the containers dynamically.
- Load balancing comes out of the box; a port published on the swarm is available on all nodes
- No need to know IP addresses anymore; by using service names instead, we can let Docker handle the internal routing and balancing between replicated containers.
To see the routing mesh in action, check out this demo video.
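To make the mesh concrete, here is a small sketch. The service name, image, and node IPs are examples; any published port behaves this way:

```shell
# Publish port 80 for the service cluster-wide.
$ docker service create --name web --replicas 3 -p 80:80 nginx

# A request to ANY node's IP on port 80 is routed to one of the
# replicas, wherever in the cluster it happens to be running.
$ curl http://192.168.99.100/   # node 1
$ curl http://192.168.99.101/   # node 2, even with no replica here
```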
Rolling Deploys
This is a very cool new feature. Since the docker engine is now aware of the desired state of the services on the cluster, we can tell it to update the replicas one by one (or two by two; it's configurable via the --update-parallelism flag).
This means that we can update replicated containers safely and transparently.
For this to go "transparently" you do of course have to make sure that your containers are backwards compatible; otherwise it might be best to tear off the bandaid in one go and update all containers at the same time. The point is that rolling updates are here, and they're a welcome feature, as there is no longer any need to script update rules and processes to deploy containers transparently.
- We had to manage this manually, either using blue-green deployments or a hand-crafted script to roll out updates
- New service update commands with rolling changes
The Docker team made a demo video of rolling updates, check it out here.
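A rolling update can be sketched like this, again assuming a service named web; the image tags and timings are illustrative:

```shell
# Roll the service to a new image, updating two containers at a time
# and waiting 10 seconds between batches.
$ docker service update \
    --image nginx:1.11 \
    --update-parallelism 2 \
    --update-delay 10s \
    web
```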
Docker 1.12 makes managing a docker swarm a whole lot easier and simpler. Removing the need for external tools like Consul means fewer moving parts. With these changes Docker is slowly becoming a real alternative to the current big players like Kubernetes, which have dominated the scene up to now; we'll have to wait and see how it pans out.
What do you think of the new features in Docker 1.12? Reach out in the comments below.