Using Docker Compose to create a Build Pipeline for Web Applications

Docker is perfect for making a developer’s life easier. Thanks to containers, you can split the many facilities that make up your application into microservices, dividing the problem into more manageable blocks. For instance, you can spin up a container for a Redis database alongside a container built from a node.js / express image, and have your infrastructure up and running with no hassle.
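As a tiny illustration of that idea (the service names, the `server.js` entry file, and the port here are hypothetical, not part of this article’s project), such a Redis + express stack could be declared like so:

```yaml
# Hypothetical minimal stack: an express app next to a Redis instance.
version: '2'

services:
  redis:
    image: redis:latest

  web:
    image: node:latest
    command: node server.js   # assumes your express app lives in server.js
    ports:
      - "3000:3000"
    depends_on:
      - redis
```

A single `docker-compose up` would then bring both containers up together.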

Docker can also prove handy for optimizing the build pipeline. Using docker-compose, Docker’s orchestration tool, together with volumes, you can build your app stage by stage, passing the result of each build step to the next one through shared Docker data *volumes*. At the end of the pipeline, you have a container that launches the very services of your application, with access to all of the artifacts built so far via the shared data volumes.

Docker Compose in Action

Let’s see an example in action.

Say you have a single-page web app, architected around a Golang back-end and a modern front-end managed by some of the fanciest JavaScript asset-management tools of the day, so most probably using node.js / npm.

You have two stages of building here:
1. First, you have to build the *JavaScript* artifacts: launch npm, bower, gulp, etc. to stage your front-end assets for production.
2. Then, you have to build AND run the *Golang* server: first download your project’s dependencies (assuming you use Godeps), then build and install the *Golang* service, and finally run it as a daemon at the startup of your app (containerized or not).

If we want to approach such a situation with containers, we’d have to:

  1. Prepare a container running node.js to install the JavaScript staging dependencies and tools, and generate the production-ready front-end assets
  2. Store these generated front-end assets in a data volume so we can persist them and hand them over to the back-end
  3. Spin up a container running Golang, mount the previously prepared data volume on it, fetch the dependencies, build the service, and assign an entry point (that is, the command firing the service upon launch of the container)

While we could proceed through these steps by hand just fine, or script them using Shell or whatever, we can instead use docker-compose, the very useful orchestration tool that comes with the Docker Toolbox distribution. (Linux users might have to install it by hand.)
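For reference, the by-hand version of those three steps might look like this (the image names here are made up for illustration; the volume and path names match the ones used later in this article, and a local Docker daemon is assumed):

```shell
# 1. Create a named volume to carry the built assets between containers
docker volume create front-public

# 2. Build the front-end image; running it with the empty named volume
#    mounted at /public makes Docker copy the staged assets into the volume
docker build -t front-init ./www/public
docker run --rm -v front-public:/public front-init true

# 3. Build the back-end image and run it with the same volume mounted
docker build -t pipeline-sample .
docker run -d -p 9001:9001 \
    -v front-public:/go/src/pipeline-sample/www/public \
    pipeline-sample pipeline-sample start
```

docker-compose does exactly this for us, plus the dependency ordering, from a single declarative file.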

docker-compose makes it easy to orchestrate the building of Dockerfiles and the creation and mounting of data volumes, and it treats the generated images and containers as services, abstracting away the complexity of managing your containers, and any dependencies between them, by hand.

Our application directory on the local host, from which we’ll launch the image/container builds, is structured as follows:

├── Dockerfile
├── docker-compose.yml
├── main.go
├── Godeps
└── www
    └── public
        ├── Dockerfile
        ├── Gruntfile.js
        ├── app
        ├── bower.json
        ├── img
        ├── index.html
        ├── js
        └── package.json

The Dockerfile present at the root of our directory will be responsible for building the Golang artifacts, whereas the JavaScript assets building will be handled by a container described
in the Dockerfile stored under the www/public folder.

Let’s first build the Dockerfile responsible for staging the JavaScript assets, the one under the www/public folder:

FROM node:latest
COPY . /public
WORKDIR /public

RUN npm install -g grunt-cli && \
    npm install -g bower && \
    npm install && \
    bower install --allow-root && \
    grunt concat

As you’ll have noticed, this builds a container image, copying the assets from the local host (the one building the image) into the /public folder, then running npm to install grunt and bower, installing the bower dependencies, and finally launching a grunt task to generate the production-ready JavaScript.

With the JavaScript artifacts ready, now we can prepare the back-end part written in Golang:


FROM golang:1.6.2

RUN mkdir -p /go/src/pipeline-sample/www/public

# copy the project sources so go get / go build can see them
COPY . /go/src/pipeline-sample

WORKDIR /go/src/pipeline-sample
RUN go get -d -v
RUN go clean && go build && go install -v


This Dockerfile builds a container image that fetches the dependencies, compiles main.go, and creates the www/public folder on which we are going to mount the data volume containing the JavaScript artifacts prepared by the previous image/container.

Orchestrating using Docker Compose

Now, time to orchestrate the operation using docker-compose. In the docker-compose.yml file in the root of our project, type the following:


version: '2'

volumes:
  front-public:
    external: false

services:
  front-init:
    build: ./www/public
    volumes:
      - front-public:/public

  pipeline-server:
    build: .
    ports:
      - "9001:9001"
    depends_on:
      - front-init
    volumes:
      - front-public:/go/src/pipeline-sample/www/public
    command: pipeline-sample start


As you can see, we tell docker-compose, when launching the pipeline-server service, to begin by bringing up front-init, then to build the image and run a container from it as described by the Dockerfile at the root of the project.

Before the front-init image is built and its container run, a volume labeled front-public is created. This volume is mounted on the container launched as part of the front-init service, and hosts the JavaScript artifacts staged by www/public/Dockerfile.

Then, this volume is mounted on the container running as part of the pipeline-server service, specifically at /go/src/pipeline-sample/www/public inside that container.

Finally, the command pipeline-sample start is fired, launching the back-end, which can access the JavaScript assets in the volume shared in /go/src/pipeline-sample/www/public.

To launch the pipeline-server service (which will trigger the front end assets building), just run docker-compose like so:

docker-compose up -d pipeline-server
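Once the stack is up (assuming Docker is running locally), you can sanity-check the result from the host:

```shell
docker-compose ps                # both services should be listed
curl -i http://localhost:9001/   # should return the staged front-end assets
```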

Et voilà !
