In this blog post we’ll look at how to provision a swarm cluster on bare metal servers using Ansible. Once everything is set up, provisioning a new swarm cluster is as simple as running a single Ansible playbook.

Note: this post assumes that docker is already installed on each of the servers you are going to provision. For instructions on how to install docker, see the official docs.

Before we jump in, here is the directory structure that you should end up with at the end of the post. As you work through the post, feel free to check back here to make sure everything fits together.

.
├── inventory/
│   └── pre
│       ├── host_vars
│       │   ├── swarm-1.pre.your.company
│       │   ├── swarm-2.pre.your.company
│       │   ├── swarm-3.pre.your.company
│       │   └── swarm-4.pre.your.company
│       └── pre
├── vendor-roles
│   ├── emmetog.docker-compose
│   ├── emmetog.consul
│   ├── emmetog.swarm-master
│   └── emmetog.swarm-agent
├── ansible.cfg
├── ansible-requirements.yml
└── deploy-docker-cluster.yml

Let’s get started!

The Ansible Inventory

Ansible encourages a conceptual split between “what” is executed and “where” it is executed. The “what” goes in playbooks and the “where” goes in inventories. This makes Ansible very flexible since you can “run anything, anywhere”. We’re going to take advantage of this so we’ll put our swarm cluster servers (the “where”) as inventory and our provisioning scripts (the “what”) will be playbooks.

Let’s imagine that our first environment is called “pre”, for pre-production. Create the inventory directory as shown in the directory structure above.

We’re going to run the docker cluster on bare metal servers so we’ll need to tell Ansible how to connect to each server.

We can configure settings that apply only to individual hosts using host_vars. A good example of configuration that is specific to each host is the connection details: you might want to use different SSH keys or credentials for each server.

Add this to each swarm-X.pre.your.company file:

# Ansible variables
ansible_ssh_user: ubuntu
ansible_ssh_private_key_file: ssh_keys/pre-aws.pem

# Any other custom host variables that apply only to this host should go here.
hostname: swarm-1.pre.your.company # Change this for each server

In this example we’re using the same private key to access all of the servers but you can see how Ansible is flexible enough to allow us to easily change it per server.
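
For example, if one server needed its own key, its host_vars file could override just that variable. A hypothetical file for swarm-2 might look like this (the key filename is made up for illustration):

# inventory/pre/host_vars/swarm-2.pre.your.company
ansible_ssh_user: ubuntu
ansible_ssh_private_key_file: ssh_keys/pre-aws-swarm-2.pem  # hypothetical per-host key

hostname: swarm-2.pre.your.company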

You’ll also notice the hostname variable. It’s important that the hostname is set correctly on each server because swarm affinities and constraints rely on it; if it’s wrong we’ll get headaches when trying to control the placement of our docker services on the swarm. The playbook below makes sure the hostname is correct on each node when we provision the swarm.
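
As an example of why this matters: once the swarm is running, standalone swarm lets you pin a container to a specific node by its hostname with a constraint (nginx here is just an example image):

$ docker -H :4000 run -d -e constraint:node==swarm-1.pre.your.company nginx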

Next up is the inventory/pre/pre file, fill it with this:

[docker_cluster]
swarm-1.pre.your.company
swarm-2.pre.your.company
swarm-3.pre.your.company
swarm-4.pre.your.company

# This group can only contain one server, it will become the primary
# server in the cluster. Once the cluster is up and functioning then
# it will re-elect its own primary, but this primary is needed for
# bootstrapping.
[docker_cluster_primary]
swarm-1.pre.your.company

[docker_cluster_replicas]
swarm-2.pre.your.company
swarm-3.pre.your.company
swarm-4.pre.your.company

This file groups the individual servers into different groups. We’ll refer to these groups in the playbooks instead of having to list all of the individual servers.

We’ve finished configuring our inventory!
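
Before moving on, it’s worth checking that Ansible can actually reach every host in the inventory. Assuming the SSH user and key above are correct, pinging the docker_cluster group should report success for all four servers:

$ ansible -i inventory/pre/pre docker_cluster -m ping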

The Ansible Playbook

Now it’s time to write the playbook that will do the dirty work of bootstrapping the swarm. Create a playbook called deploy-docker-cluster.yml and paste this inside:

---

- hosts:
    - docker_cluster_primary
    - docker_cluster_replicas
  sudo: yes

  roles:
    - emmetog.docker-compose
  tasks:
    - name: Set machine hostname
      hostname: name={{ hostname }}

  # Set up the consul cluster
  # Configure the first instance to bootstrap the cluster
- hosts:
    - docker_cluster_primary
  sudo: yes
  roles:
    - {
        role: "emmetog.consul",
        consul_command: "-server -advertise {{ ansible_default_ipv4['address'] }} -bootstrap-expect {{ groups['docker_cluster']|length }}"
      }

  # Configure consul on the other instances to join the cluster
- hosts:
    - docker_cluster_replicas
  sudo: yes
  roles:
    - {
        role: "emmetog.consul",
        consul_command: "-server -advertise {{ ansible_default_ipv4['address'] }} -join {{ hostvars[groups['docker_cluster_primary'][0]]['ansible_default_ipv4']['address'] }}"
      }

  # Set up the docker swarm on all instances
- hosts:
    - docker_cluster_primary
    - docker_cluster_replicas
  sudo: yes
  roles:
    - emmetog.swarm-master
    - emmetog.swarm-agent

In here you’ll notice a few different things. First, we’re using some roles that we haven’t defined yet; we’ll look at those in the next section. Also notice that we’re using the groups that we defined earlier in the inventory.

The playbook is really pretty simple: it installs docker-compose, makes sure the hostname is correct, and then bootstraps a consul cluster. Finally, it starts the swarm master and swarm agent containers on each node.
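
To make the consul plays a little more concrete: with the four-node “pre” inventory above, the consul_command rendered on the primary would expand to something like the following (the IP addresses are just illustrative placeholders):

-server -advertise 10.0.1.10 -bootstrap-expect 4

and on each replica to something like:

-server -advertise 10.0.1.11 -join 10.0.1.10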

The Ansible Roles

All of the roles above are available on Ansible Galaxy. Create a new file in the root of the project called ansible-requirements.yml with this inside:

- src: emmetog.docker-compose
  path: vendor-roles/
  
- src: emmetog.consul
  path: vendor-roles/
  
- src: emmetog.swarm-master
  path: vendor-roles/
  
- src: emmetog.swarm-agent
  path: vendor-roles/

Pro Tip: It’s a good idea to separate the vendor roles from our own roles because it keeps things tidy and organized. For this to work, though, we’ll need to add the vendor-roles/ directory to the Ansible roles path. To do this, create another file in the root of the project called ansible.cfg with this inside:

[defaults]
roles_path = roles:vendor-roles

With this change Ansible will look in both roles/ and vendor-roles/ when resolving roles.

Now that we have written our ansible-requirements.yml file, we can install the galaxy roles by running this:

$ ansible-galaxy install --role-file ansible-requirements.yml
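
Once the install finishes, the four roles should appear under vendor-roles/, matching the directory structure at the top of the post:

$ ls vendor-roles/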

Take a moment to have a look through the installed roles to see what each one is doing.

Running the Playbook

To run the playbook just specify the inventory that you want to run against. For example:

$ ansible-playbook -i inventory/pre/pre deploy-docker-cluster.yml
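
A couple of optional flags can be handy here: --syntax-check will parse the playbook without touching the servers, and --limit restricts the run to a subset of hosts, for example to re-provision a single node:

$ ansible-playbook -i inventory/pre/pre deploy-docker-cluster.yml --syntax-check
$ ansible-playbook -i inventory/pre/pre deploy-docker-cluster.yml --limit swarm-2.pre.your.company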

Wrapping Up

That’s it! Now you have a simple playbook which will bootstrap a fully working docker swarm. To test it, log into any one of the servers and run this:

$ docker -H :4000 info

The “-H :4000” tells docker to talk to the swarm master, which is listening on port 4000. If you leave out the “-H :4000” you’ll be talking to the local docker daemon instead of the swarm.
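
You can also launch a container through the swarm master and let it pick a node for you, then list what’s running across the whole cluster (nginx is just an example image):

$ docker -H :4000 run -d --name hello-swarm nginx
$ docker -H :4000 ps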

In the next post I’ll explain how we can use docker-compose to deploy our docker applications to this swarm using Ansible.

What did you think about this post? Feel free to comment below.