In this post you’ll learn how to use the Jenkins Pipeline plugin to build Docker images continuously.

Here’s what we’ll do in this post:

  • Starting a Jenkins Server
  • Setting up the Jenkins Job
  • Creating the Jenkinsfile
  • Setting up a Jenkins slave in EC2
  • Run the job
  • 3rd Party Services for Building Images
  • Further improvements

Starting a Jenkins Server

First up you’ll need a Jenkins server. Since we’re all into Docker here, we’ll run Jenkins locally in a Docker container to demonstrate. If you’ve already got a Jenkins server set up then you can safely skip this step.

$ docker run -d --name jenkins -p 8080:8080 jenkins:2.7.2
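
If you want the Jenkins configuration to survive the container being removed, mount a named volume over Jenkins’ home directory (the official image keeps everything under /var/jenkins_home):

$ docker run -d --name jenkins -p 8080:8080 -v jenkins_home:/var/jenkins_home jenkins:2.7.2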

In newer versions of Jenkins the initial admin password is written to the logs. To find it, run:

$ docker logs -f jenkins
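
Jenkins also writes that password to a file inside the container, so if it has scrolled past in the logs you can read it directly:

$ docker exec jenkins cat /var/jenkins_home/secrets/initialAdminPassword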

Copy the password from the logs; it should look something like this:

[Image: the initial admin password in the Jenkins logs]

Now if we hit 127.0.0.1:8080 in our browser we should see a shiny new Jenkins! Enter the password you copied earlier to start the engines!

When Jenkins asks you which plugins to install, select “Install suggested plugins”.

[Image: choosing which plugins to install]

Once all plugins are installed finish the setup by adding your admin user details:

[Image: creating the admin user]

Now you should see the Jenkins homepage with no jobs yet:

[Image: the Jenkins homepage with no jobs]

There are a few more plugins that we’ll need to install before we can get everything working. Go into the plugin manager of Jenkins (“Manage Jenkins” -> “Plugin Manager”) and install these plugins:

  • CloudBees Docker Pipeline (docker-workflow) - Allows us to use Docker commands in our pipelines
  • Amazon EC2 Plugin (ec2) - Allows Jenkins to dynamically provision EC2 slaves
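
If you’d rather script this step, the Jenkins CLI can install plugins too. A minimal sketch, assuming you’ve downloaded jenkins-cli.jar from your server (it’s served at /jnlpJars/jenkins-cli.jar; depending on your security settings you may also need to pass credentials):

$ java -jar jenkins-cli.jar -s http://127.0.0.1:8080/ install-plugin docker-workflow ec2 -restart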

Setting up the Jenkins Job

Now that we have a working Jenkins server, let’s set up the job which will build our Docker images.

Click on “create new jobs”, give your job a name and select the “Pipeline” type of job.

In the job configuration, go down to the “Pipeline” section and choose “Pipeline script from SCM”. This means that the definition of the pipeline (basically, the steps to run for the job) will be read from the project’s repository. This is really cool because it moves some of the job’s configuration out of Jenkins and into the repository of the project itself.

[Image: the pipeline section of the job configuration]

If you need to set up credentials so that Jenkins can check out your code then do that now.

At this point we can run the build and Jenkins will check out the code and try to read the pipeline definition from the Jenkinsfile in the code repository. Since that file doesn’t exist yet, it will fail.

Let’s create the Jenkinsfile now.

Creating the Jenkinsfile

In the root of the project that you’re building, create a new file Jenkinsfile with this inside:

node("docker") {
    docker.withRegistry('<<your-docker-registry>>', '<<your-docker-registry-credentials-id>>') {
    
        git url: "<<your-git-repo-url>>", credentialsId: '<<your-git-credentials-id>>'
    
        sh "git rev-parse HEAD > .git/commit-id"
        def commit_id = readFile('.git/commit-id').trim()
        println commit_id
    
        stage "build"
        def app = docker.build "your-project-name"
    
        stage "publish"
        app.push 'master'
        app.push "${commit_id}"
    }
}

Fill in the placeholders with your own values. You will need to set up the credentials manually in Jenkins and then insert the “credentials id” of each credential set into the Jenkinsfile. In this case you will need two credentials: one to log in to the Docker registry and one to log in to your VCS to check out the code.

This Jenkinsfile is really simple: it checks out the codebase, reads the current commit hash from the repo, builds the Docker image and then pushes it to the registry. Notice that we also push a tag of the commit id; this is totally optional, but it allows for more controlled deploys.
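
For example, a deploy script can pull the image for the exact commit it wants to run instead of whatever the ‘master’ tag currently points at (using the same placeholders as above):

$ docker pull <<your-docker-registry>>/your-project-name:<<commit-id>>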

You’ll notice that we specified the “docker” node for this job. That means this job will only run on a node with the label “docker”. Since we don’t have any nodes with that label yet, let’s set up a Jenkins slave that has Docker installed.

Setting up a Jenkins slave in EC2

We’re nearly finished. We have set up the pipeline, but if we were to run the job manually it would never complete: there are no nodes available to run it on, so it would wait forever for one to become available.

Earlier we installed the “Amazon EC2 Plugin”, which allows us to spin up Jenkins slaves dynamically. We’ll use it to provision a slave with Docker installed on it; our Docker builds can then run on that node.

Go into the Jenkins configuration (“Manage Jenkins” -> “Configure System”) and set up the “cloud” section with your own details. At the end it should look something like this:

[Images: the EC2 cloud configuration screens]

The most important thing to note here is the label “docker”; this is what lets our job run on this slave.

Notice also the init script; it just installs Docker on the slave.
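
For reference, here’s roughly what such an init script could look like on a stock Amazon Linux AMI (a minimal sketch; the package manager and remote user will differ on other AMIs):

#!/bin/bash
# Install Docker, start the daemon, and let the slave's SSH user run docker commands.
yum install -y docker
service docker start
usermod -aG docker ec2-user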

Feel free to tweak any of these settings for your own circumstances, you might want to change the “idle termination time” or the availability zones for example.

Run the job

Now that we have everything set up, let’s run the job! We haven’t yet configured any triggers so go ahead and trigger the job manually.
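
The simplest way is the “Build Now” button on the job’s page; if you prefer the command line, the Jenkins CLI can do it too (hypothetical job name):

$ java -jar jenkins-cli.jar -s http://127.0.0.1:8080/ build your-docker-job -f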

If all goes well, you should see the EC2 slave boot up automatically and the Docker image being built on it.

If you want to set up more advanced triggers, like building on a GitHub webhook, by all means go ahead; we won’t cover that in this post though.
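
That said, a cheap middle ground is to let Jenkins poll the repository for changes. A minimal sketch, placed at the top of the Jenkinsfile before the node block (this uses the pollSCM trigger symbol available on recent Jenkins versions):

// Poll the repo every ~5 minutes and build when new commits appear.
properties([pipelineTriggers([pollSCM('H/5 * * * *')])])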

3rd Party Services for Building Images

Before we finish it’s worth noting that we have a few 3rd party service options when it comes to building the images. It’s up to you to decide if you want to use one of them or do it yourself with Jenkins.

Here are a few of the 3rd party services that can help us build:

  • NimbleCI
  • Docker Hub automated builds
  • Quay.io

[Disclaimer] We’re the team behind NimbleCI.

All of these services can build your images automatically for you; we won’t go into the differences between each one here, except to say that NimbleCI is our favourite :)

Further improvements

It is also possible to deploy a Jenkins server, including all configuration, using Ansible. This has the huge advantage that you can destroy the entire Jenkins server and redeploy it completely, configuration and all, at any moment. Every little bit of configuration can be put in source control, including Jenkins’ internal configs and the configuration of each job. This makes it much easier to track changes to Jenkins configs, removes the need to perform backups of Jenkins, and does away with configuring Jenkins manually through the UI.

In a future post you’ll learn how to do exactly that.