As all software developers know, automating tests becomes more and more important as a project grows and standards rise. Automated tests catch bugs as early as possible during development, and that quick feedback lets us iterate and fix problems fast. In turn, code gets into production sooner (since bugs are found and fixed more quickly) and what does reach production has fewer bugs (since it passes the tests). In this post we’ll look at running end user tests in a continuous testing workflow for Docker applications.
What we’re aiming for
Let’s imagine that we have a Docker project which we keep in GitHub. A team of 5 people works on the code in this project, creating a feature branch for each task so that the other members of the team can review the code and suggest improvements. As well as the manual code review “test”, we want to run three types of tests: unit tests (code based), functional tests (also code based) and some end user tests. The unit tests and the functional tests are code based, in other words they don’t need a webserver to run, while the end user tests do need a webserver. Since we’re mad about Docker around here, our application already has a Dockerfile which builds an image with the webserver and the code baked in.
For each pull request that the team opens we want to run all three types of tests before the pull request is merged.
Setting up the code based tests is not complicated; you can use Jenkins or one of the many other CI tools that exist. In this post I’m going to focus on running the end user tests. By the end we’ll have a script that runs them, so we can easily run the end user tests on a developer’s local machine or configure Jenkins to run them on each PR.
End User Tests
To run the tests we will bring up two containers, the first is the application itself and the second is a simple container which runs the end user tests. These end user tests could take the form of simple curl requests or they could be a complex collection of behat tests running in phantomjs. The “how” is not important, what is important is the “what”; the test container should be able to connect to the application container over a url in the same way that a user would.
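As a concrete example of the “simple curl requests” end of the spectrum, here’s a minimal sketch of what a test container could run. The `wait_for_url` helper and the `TARGET` environment variable are our own conventions (`TARGET` matches the variable the docker-compose file in this post passes to the tester service):

```shell
#!/bin/sh
# Minimal end user "test" a tester container could run: wait until the app
# answers on TARGET, then assert it returns a successful HTTP response.
# TARGET (e.g. http://web:5000) is the url of the application container.

wait_for_url() {
  url="$1"
  tries="${2:-30}"
  i=0
  # curl -f makes non-2xx responses count as failures
  until curl -fsS -o /dev/null "$url"; do
    i=$((i + 1))
    if [ "$i" -ge "$tries" ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# Only run when TARGET is set (docker-compose sets it for the tester service)
if [ -n "${TARGET:-}" ]; then
  if wait_for_url "$TARGET"; then
    echo "OK: $TARGET responded"
  else
    echo "FAIL: $TARGET did not respond" >&2
    exit 1
  fi
fi
```

The retry loop matters because docker-compose’s `depends_on` only waits for the container to start, not for the webserver inside it to be ready to accept connections.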
We’ve put together a simple example of a “test container” on GitHub which runs a very simple Behat test against a url. You can use it as a reference or as a base to build your own test container.
Here’s what the flow will look like:
- First we build the docker image from the code
- Then we build the “testing image” which contains the test code
- We’ll use docker-compose to bring up the application and the test container together and run the tests
- Finally we’ll clean everything up
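The steps above can be sketched as a single script, assuming the docker-compose.yml shown later in this post and a docker-compose version that supports `--exit-code-from`. The test image tag and the `Dockerfile.tests` filename are hypothetical; replace them with your own:

```shell
#!/bin/sh
set -eu

# 1. Build the application image from the code
docker build -t emmetog/page-hit-counter .

# 2. Build the "testing image" which contains the test code
docker build -t my-org/e2e-tests -f Dockerfile.tests .

# 3. Bring up the application and the test container together and run the
#    tests; --exit-code-from makes docker-compose exit with the tester
#    container's exit code, so the script (and CI) fails when the tests fail
set +e
docker-compose up --exit-code-from tester
tests_exit_code=$?
set -e

# 4. Clean everything up (containers and the network)
docker-compose down

exit "$tests_exit_code"
```

Because the script exits with the tester’s exit code, any CI tool that treats a non-zero exit as a failed build will mark the PR red when the end user tests fail.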
Building The Images
To build the images we can configure Jenkins or use NimbleCI, which will build the images quickly and reliably. We won’t go into the details of how to set these up since it’s not overly complicated; for now let’s assume that the images get built.
Disclaimer: I’m the founder of NimbleCI so I’m biased when I say that you should use it. But you should :) It’s very easy to set up and it builds containers fast and reliably.
Running the tests
This step is just about putting together a docker-compose.yml file that wires together your application and the test container with the right settings. It might go something like this:
```yaml
version: '2'
services:
  web:
    image: emmetog/page-hit-counter
    links:
      - "redis:redis"
    expose:
      - "5000"
    depends_on:
      - "redis"
  redis:
    image: redis:3.0.5
  tester:
    image: nimbleci/docker-base-tester-behat
    links:
      - "web:web"
    environment:
      TARGET: "http://web:5000"
    depends_on:
      - web
```
Basically we just start up the application container along with any dependencies it has, and then run the “tester” container, which runs the tests against the outward facing port of the application container. If you copy that docker-compose.yml and run `docker-compose up`, you’ll see in the logs that the web container starts and then a very simple Behat test runs against the page-hit-counter application. For your own application you should adapt it to use your own application and tester images.
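As a starting point for that adaptation, here’s a stripped-down skeleton of the same file; every image name and port here is a placeholder to swap for your own:

```yaml
version: '2'
services:
  web:
    image: my-org/my-app        # your application image
    expose:
      - "8080"                  # whatever port your app listens on
  tester:
    image: my-org/my-e2e-tests  # your test image
    links:
      - "web:web"
    environment:
      TARGET: "http://web:8080"
    depends_on:
      - web
```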
This docker-compose.yml is an example to explain the concept, but you might find it easier to split the web and tester images into two different docker-compose files, or to use a docker-compose file for the application and run the tester container directly via `docker run ...`.
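That second variant might look something like this. The network name follows docker-compose’s `<project>_default` convention, so “myapp” here is a placeholder for your own project directory name:

```shell
# Bring up only the application and its dependency with docker-compose
docker-compose up -d web redis

# Run the tester image directly, attached to the compose network so it can
# reach the web service by name
docker run --rm \
  --network myapp_default \
  -e TARGET="http://web:5000" \
  nimbleci/docker-base-tester-behat

# Clean up when the tests are done
docker-compose down
```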
Setting up the tests to run continuously then just becomes a matter of configuring Jenkins (or your CI solution of choice) to bring these services up, run the tests and clean up when the tests are done.
That’s it! What do you think of this approach? If you have any suggestions or improvements I would love to hear them.