In a previous post we looked at how to provision a swarm cluster using Ansible. In this post we will take the automation one step further and learn how to deploy a full application to the cluster using docker-compose and Ansible. To get the most out of this post you should already have a docker swarm up and running, but it doesn’t matter whether you provisioned it with Ansible as described in that post or by hand; as long as you have a working swarm, the commands here will work.

At the end of this post we will have an Ansible playbook that deploys a docker application to a swarm, based on a docker-compose definition. We are going to use Ansible to run the docker-compose commands on one of the servers within the swarm itself. It is also possible to control the swarm remotely from any computer outside the swarm; if you take that route, be sure to secure it with TLS as explained here. In this post we’ll take the simpler route of closing off the swarm from the public internet completely and only allowing interaction with it after logging in to a node via SSH.

In practice, securing a swarm like this means using a firewall to block all public-facing ports (except SSH and any others you actually need) while allowing traffic on all ports between the nodes internally, so the components of the swarm can coordinate with each other. The benefit is that access to the swarm is already secured by SSH, and we avoid having to set up a certificate authority server (or use a third-party one) just to secure the connection between an outside host and the swarm.
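We won’t cover the firewall setup in this post, but as a rough idea of what it looks like, here’s a minimal sketch using Ansible’s ufw module (this assumes Ubuntu hosts with ufw installed and a 10.0.0.0/24 private network between the nodes; adapt the subnet and ports to your own setup):

- hosts:
    - docker_cluster
  sudo: yes
  tasks:

  # Add the allow rules first so we don't lock ourselves out when
  # the default-deny policy is enabled below.
  - name: Allow SSH in from anywhere
    ufw:
      rule: allow
      port: "22"
      proto: tcp

  - name: Allow all traffic between nodes on the private network
    ufw:
      rule: allow
      from_ip: 10.0.0.0/24

  - name: Deny everything else and enable the firewall
    ufw:
      state: enabled
      policy: deny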

With the introduction over, let’s jump in!

Ansible Inventory

As before we need to set up our inventory so that Ansible knows which servers we are managing and how to connect to them. If you haven’t already set up the inventory then go back to the previous post to see how to do it. As a recap, when your inventory is configured you should end up with a file structure that looks like this:

.
├── inventory/
│   └── pre
│       ├── host_vars
│       │   ├── swarm-1.pre.your.company
│       │   ├── swarm-2.pre.your.company
│       │   ├── swarm-3.pre.your.company
│       │   └── swarm-4.pre.your.company
│       └── pre
└── ...
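The pre file inside the pre directory is the inventory file itself. Its exact contents come from the previous post, but the important thing for this post is that it defines the docker_cluster and docker_cluster_primary groups that the playbook below targets; roughly something like this:

[docker_cluster]
swarm-1.pre.your.company
swarm-2.pre.your.company
swarm-3.pre.your.company
swarm-4.pre.your.company

[docker_cluster_primary]
swarm-1.pre.your.company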

The Ansible Playbook

Now we’re going to write the Ansible playbook that will deploy the application to the swarm. Create a playbook called deploy-my-cool-web.yml and paste this in:

- hosts:
    - docker_cluster
  sudo: yes

  pre_tasks:
    - assert: { that: "tag != ''" }

  roles:
    - role: emmetog.docker-compose
      docker_compose_version: "1.7.1"


  # Pull images manually on all nodes. This is needed if you're
  # using private registries because of a bug in swarm.
  # See https://github.com/docker/swarm/issues/374#issuecomment-156355682
  
- hosts:
    - docker_cluster
  sudo: yes
  tasks:

  - name: Log in to private docker registry
    command: docker login -u {{ docker_registry_username }} -p {{ docker_registry_password }} my-docker-registry.com

    # Manually pull the images bypassing docker-compose and
    # swarm, see https://github.com/docker/swarm/issues/374#issuecomment-156355682
    
  - name: Pull (on all nodes) the my-cool-web image
    command: "docker pull my-docker-registry.com/my-cool-web:{{ tag }}"



  # Run the docker compose commands only on the primary
  # node, the containers will be scheduled by the swarm.
  
- hosts:
    - docker_cluster_primary
  sudo: yes

  tasks:

  - name: Ensure docker-compose directory exists
    file:
      path: /containers/docker-compose/my-cool-web
      state: directory

  - name: Ensure docker-compose file is up to date
    template:
      src: templates/docker-compose/docker-compose-my-cool-web.yml
      dest: "/containers/docker-compose/my-cool-web/docker-compose.yml"

  - name: Run docker-compose up
    command: /usr/local/bin/docker-compose -f /containers/docker-compose/my-cool-web/docker-compose.yml up -d
    environment:
      DOCKER_HOST: ":4000"

So, let’s look at what we’re doing here. The first thing you’ll notice is that we pull the images individually on each node before running docker-compose up. This works around a bug in swarm that breaks pulling from private registries through the swarm itself.

We’re also using the emmetog.docker-compose role in this playbook; if you haven’t installed it yet, go back to part 1 of this series and install it.
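If it’s missing, it only takes a minute: declare the role in ansible-requirements.yml (you can see it in the file tree further down), roughly like this:

- src: emmetog.docker-compose

and then install it into the vendor-roles directory with:

$ ansible-galaxy install -r ansible-requirements.yml -p vendor-roles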

Next, you’ll notice the {{ tag }} variable, which I use to deploy a specific tag of the service. In the “pre_tasks” section of the playbook we make sure that the user specified this variable with the --extra-vars "tag=my-tag" argument when running the playbook. However, if you know you’re always going to deploy the same tag, you can remove that check and replace every occurrence of {{ tag }} with whichever tag you want.
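If you’d rather have a default instead, one option (a sketch, not something we use in the rest of this post) is to drop the assert and define the variable at play level on each play that uses it; anything passed with --extra-vars still wins because extra vars have the highest precedence in Ansible:

- hosts:
    - docker_cluster
  sudo: yes
  vars:
    tag: latest   # fallback; --extra-vars "tag=..." overrides this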

Speaking of variables, don’t forget to set the {{ docker_registry_username }} and {{ docker_registry_password }} variables; you could replace them with hardcoded values or, if you’re feeling extra pro, encrypt them in an Ansible vault file.
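If you go the vault route, here’s a quick sketch (the file location is just an assumption; any group_vars file that applies to the docker_cluster hosts will do):

$ ansible-vault create inventory/pre/group_vars/docker_cluster

Inside the (encrypted) file, define the two variables:

docker_registry_username: my-registry-user
docker_registry_password: my-registry-password

Then pass --ask-vault-pass when running ansible-playbook so Ansible can decrypt it. It’s also worth adding no_log: true to the registry login task so the password doesn’t show up in the Ansible output.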

At the end of the playbook we finally run the docker-compose up command, but notice the DOCKER_HOST environment variable. This tells docker-compose to talk to the swarm instead of the local docker daemon. If you’ve been following the previous post on provisioning the swarm with Ansible, the swarm masters will be listening on port 4000; otherwise change this to whichever port your swarm masters are listening on.
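A quick way to see what that variable does is to log in to one of the master nodes and run docker info with it set; instead of describing the local daemon it should describe the whole swarm, listing every node in the cluster:

$ DOCKER_HOST=:4000 docker info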

The docker-compose.yml template

The last thing to notice in the playbook is that we copy a docker-compose.yml file across to the node. We’ll need to create that file; in it we define the “my-cool-web” service and its dependencies. Create a file at templates/docker-compose/docker-compose-my-cool-web.yml with the definition of your service inside. For example, you might have something like this:

redis:
    image: redis:3.0.5

app:
    image: page-hit-counter
    links:
    - "redis:redis"
    expose:
    - "5000"

As an example, that will deploy an instance of the page-hit-counter sample application. Replace the contents of this file with your own services; in our case the app image would be my-docker-registry.com/my-cool-web:{{ tag }}, matching the image we pulled in the playbook (remember that this file is an Ansible template, so {{ tag }} gets filled in when it’s copied across).

After this step you should have this file structure (including files created in part 1):

.
├── inventory/
│   └── pre
│       ├── host_vars
│       │   ├── swarm-1.pre.your.company
│       │   ├── swarm-2.pre.your.company
│       │   ├── swarm-3.pre.your.company
│       │   └── swarm-4.pre.your.company
│       └── pre
├── templates
│   └── docker-compose
│       └── docker-compose-my-cool-web.yml
├── vendor-roles
│   ├── emmetog.docker-compose
│   ├── emmetog.consul
│   ├── emmetog.swarm-master
│   └── emmetog.swarm-agent
├── ansible.cfg
├── ansible-requirements.yml
├── deploy-docker-cluster.yml
└── deploy-my-cool-web.yml

Running The Playbook

Phew! Now that we’ve set everything up, it’s time to run the playbook. To deploy your service to the swarm, run this:

$ ansible-playbook -i inventory/pre/pre deploy-my-cool-web.yml --extra-vars "tag=latest"

You can replace “latest” with any tag in your registry to deploy that specific tag.
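Once the playbook finishes, you can log in to a master node and ask the swarm where the containers ended up; with the standalone swarm, each container name is prefixed with the node it was scheduled on:

$ docker -H :4000 ps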

Wrapping Up

As I said at the beginning of the post, we’ve taken the approach of logging in to a node via SSH and then connecting to the swarm from there, and Ansible makes that very easy for us. There are drawbacks to the way we’ve implemented things, though. One obvious one is that we always use the docker_cluster_primary node to run the docker-compose commands, so it is a single point of failure. A possible improvement would be to use Ansible’s random filter to pick a random host to run them on; if you get that working, let me know in the comments.
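For what it’s worth, here’s an untested sketch of one way that might work: pick a node with the random filter in a first play and add it to a temporary group with add_host, then point the docker-compose plays at that group instead of docker_cluster_primary:

- hosts:
    - docker_cluster
  sudo: yes
  tasks:

  - name: Pick a random node to run the docker-compose commands on
    add_host:
      name: "{{ groups['docker_cluster'] | random }}"
      groups: compose_runner
    run_once: yes

- hosts:
    - compose_runner
  sudo: yes
  tasks:

  # ... the same directory, template and docker-compose tasks as before,
  # for example:
  - name: Run docker-compose up
    command: /usr/local/bin/docker-compose -f /containers/docker-compose/my-cool-web/docker-compose.yml up -d
    environment:
      DOCKER_HOST: ":4000"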

I hope it’s been interesting; share your thoughts below!