This article will help you get a Ghost blog set up inside a Docker container. First we’ll containerize the blog and get it working on your local machine, and then we’ll deploy it to a single production Ubuntu server.

To follow this you’ll need to have Docker installed on your local machine (and on your production server when we come to deploying our blog). Check out the official Docker documentation for installation instructions.
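
Once it’s installed, a quick sanity check never hurts (hello-world is Docker’s own test image, and the exact version reported will differ):

$ docker --version
$ docker run hello-world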

Getting It Working in Development

First, let’s download the Ghost source code from http://ghost.org/download. Save the zip file wherever you like (I’ll use /tmp/).

Next we’ll make a new directory for our blog.

$ mkdir -p ~/projects/ghost-blog

Next, extract the Ghost source code into that directory:

$ unzip /tmp/ghost-0.7.0.zip -d ~/projects/ghost-blog

In fact, that’s all we’re going to do for Ghost itself. If we were installing Ghost “normally”, the official installation instructions say that the next step is to run npm install --production. We’re not going to do that yet, though, because we want to run it inside the container rather than on our host. So we’ll set up the container first and come back to npm install --production later. Let’s set up the Docker container to house our blog!

To create a container we’re going to need a Dockerfile, in which we’ll specify the environment of the container. In your project’s root, alongside where you extracted the Ghost source files, create a new file called Dockerfile. Put the following into it:

FROM ubuntu

RUN apt-get update && apt-get install -y \
        npm \
        nodejs-legacy

COPY . /ghost/src/

WORKDIR /ghost/src/

RUN npm install --production

RUN mkdir -p /ghost/content/
VOLUME /ghost/content/

CMD npm start --production

Let’s go through this line by line.

The FROM command says that we want to use the ubuntu image as a base. The experts will tell me “but the ubuntu image probably has many more things on it than you need, it’s very big, it’s overkill”. Well, they’re right, but it’s fine for us since we’re only getting started. As a side note, when you feel the need for quicker builds and faster deploys you can switch to a slimmed-down image that has only what you need on it. For getting started, though, the ubuntu image will do an excellent job.

The next line in the Dockerfile is “RUN apt-get …”, which installs npm. You’ll notice that we’re also installing nodejs-legacy: Ghost expects a binary called node, but Debian and Ubuntu ship it as nodejs, and the nodejs-legacy package provides the node name as a symlink.

The “COPY” line copies our source files into the container, placing them at /ghost/src/.

The “WORKDIR” line says that subsequent commands should be run from inside the /ghost/src/ directory.

The “RUN npm install --production” line is where we actually install the node dependencies that Ghost needs to run, except this time we’re doing it inside the container.

Next, the RUN mkdir -p /ghost/content/ command just creates the /ghost/content directory if it doesn’t already exist.

The “VOLUME /ghost/content/” line marks that directory as a data volume, so anything written there lives outside the container’s filesystem. Since we’ll be using a data container (more on that later) we don’t strictly need this line, but it might save you from losing data if you forget to use a data volume.

Finally, the CMD line specifies the command that should be run by default when we run the container. The Ghost documentation says that npm start --production starts the application, so that’s what goes here.

Now we need to make a few adjustments to Ghost’s default configuration. First, copy the sample config to a new config:

$ cp config.sample.js config.js

We’ll need to edit the “production” section of the config.js file. Change the url to whatever your production URL will be, and change the filename of the database connection to /ghost/content/ghost.db. Remember that we made /ghost/content/ a volume earlier? That gives us a clean separation between the Ghost application and our articles and settings, because the latter go in /ghost/content/ and will live on the volume. If this is confusing, hang in tight; it’ll make more sense when we start backing up and restoring our Ghost data in a later post.

The final change we need to make in config.js is the interface the Ghost server listens on: change it from 127.0.0.1 to 0.0.0.0. Because Ghost will be inside a Docker container, requests won’t arrive on the loopback interface; they come in over the container’s own network interface, and the addresses involved can change every time the container is run. Listening on all interfaces (0.0.0.0) means Ghost will accept connections no matter which interface they arrive on.
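
After those edits, the production section of config.js should look roughly like the sketch below. This is based on the Ghost 0.7.x sample config, so your file may differ slightly, and yourblog.com is a placeholder:

production: {
    url: 'http://yourblog.com',                  // your real production URL
    mail: {},
    database: {
        client: 'sqlite3',
        connection: {
            filename: '/ghost/content/ghost.db'  // lives on the volume
        },
        debug: false
    },
    server: {
        host: '0.0.0.0',                         // listen on all interfaces
        port: '2368'
    }
},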

Now let’s execute all those commands in the Dockerfile by building the image:

$ docker build -t ghost-blog .

You can change the tag name of the image to whatever you want; just change it in the later commands too.
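
If you want to confirm that the build produced an image, you can list it (the ID and size will vary):

$ docker images ghost-blog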

If we were to run this container it would work, but we aren’t quite finished yet. As I hinted at earlier, we want to be able to easily back up and restore our Ghost data. A common pattern for doing this is called the Data Container.

So, before booting up our Ghost application, let’s create the data container that will hold our blog’s data.

$ docker run --detach --name ghost-blog-data --volume /ghost/content ghost-blog echo "Data for ghost blog"

If you do a quick docker ps --all you’ll see that the container exists but is not running. This is normal for a data container: it doesn’t need to be running, it just houses the data volume. Here’s the output of docker ps --all on my machine:

$ docker ps --all
CONTAINER ID    IMAGE              COMMAND                  CREATED          STATUS                      PORTS       NAMES
3fa1250e6f6f    ghost-blog     "echo 'Data for ghos     1 minute ago     Exited (0) 1 minute ago                     ghost-blog-data
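
If you’re curious where Docker actually keeps that volume on disk, you can inspect the data container and look for the volume paths in the output:

$ docker inspect ghost-blog-data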

Now that we’ve got our data container set up correctly, the final step is to run the actual blog container and tell it to use the volumes from the data container we just created:

$ docker run --detach --name ghost-blog --publish 80:2368 --volumes-from ghost-blog-data ghost-blog

The --publish option tells Docker to map port 80 on the host to port 2368 inside the container. Our Ghost application is listening on port 2368, so if you hit http://localhost:80 on the Docker host (your laptop or whatever), Docker will route the request to the container and Ghost will respond. The --volumes-from flag says that we want to use the volumes from the data container we created earlier: when we started ghost-blog-data we specified that /ghost/content/ is a volume, so now that same volume is mounted into the ghost-blog container.
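
Before opening a browser you can sanity-check things from the command line (assuming you have curl installed; expect an HTTP 200 if Ghost came up cleanly):

$ docker logs ghost-blog
$ curl -I http://localhost:80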

Visit http://localhost:80 in your browser and play around with it. When you’re ready, we’ll put this into production.

Note: You might get a “bind: address already in use” error when you try to run the container. That probably means something else is listening on port 80 on your host; check whether you’ve got Apache running. To fix this you have two options: free up port 80 by stopping the other service that is listening on it, or map a different port on the host to port 2368 inside the container, for example:

$ docker run --detach --name ghost-blog --publish 81:2368 --volumes-from ghost-blog-data ghost-blog

Then to see your blog you’d need to hit http://localhost:81 in your browser, and not port 80 like before.
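
If you’d rather find out exactly what is already bound to port 80, something like this usually works on a Linux host (you may need root):

$ sudo lsof -i :80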

Deploying to Production

So now we have a container that works; how do we get it into production? Here we have two options:

1) We can use a Docker registry. We’ll push the image that we created locally to the registry and then pull it from the server and run it as normal (sketched below). Docker Hub has a free registry for open source containers, so if you don’t mind other people being able to look at and use your image then that’s a great option. Otherwise you’ll have to pay for a private registry.

2) We can do a git pull in production, build the container from our Dockerfile on the server and then run it as normal. Quick and dirty, but it works.
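
For reference, option 1 would look roughly like this after a docker login, with yourusername standing in for your Docker Hub account:

$ docker tag ghost-blog yourusername/ghost-blog
$ docker push yourusername/ghost-blog

And then, on the server:

$ docker pull yourusername/ghost-blog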

For the sake of simplicity, in this tutorial we’ll go with option 2.

On your local machine, make sure that you’ve committed and pushed all your changes into one of your VCS repos. Then SSH into the production server, pull down the changes, and build the container in the same way we built it in dev.
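
With git, the commit-and-push half might look something like this (the remote, branch, and server address are placeholders):

$ git add Dockerfile config.js
$ git commit -m "Containerize the blog"
$ git push origin master
$ ssh user@your-server

Then, on the server, pull or clone the repo and build the image: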

$ cd <checkout of code, where the Dockerfile is>
$ docker build -t ghost-blog .

The last step is to run both the data container and the application container in the same way we did on our local machine, with one exception: this time we’re going to specify --restart always so that the Docker daemon restarts our blog even if it goes down for some reason.

To start the data container (same as dev):

$ docker run --detach --name ghost-blog-data --volume /ghost/content ghost-blog echo "Data for ghost blog"

To start the Ghost application (same as dev but with the --restart flag):

$ docker run --detach --name ghost-blog --publish 80:2368 --volumes-from ghost-blog-data --restart always ghost-blog
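
To double-check that the restart policy took effect, you can query the container with a Go template (it should print always):

$ docker inspect --format '{{ .HostConfig.RestartPolicy.Name }}' ghost-blog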

That’s it! Type your server’s IP into your browser and you should see your blog. Hurray!

In an upcoming article we’ll put a reverse proxy in front of our blog.