Using Docker Compose to create a Build Pipeline for Web Applications
Docker is perfect for making a developer's life easier. Thanks to containers, you can split the many facilities that make up your application into microservices, dividing the problem into more manageable blocks. For instance, you can spin up a container for a Redis database alongside a container built from a Node.js / Express image, and have your infrastructure up and running with no hassle.
Docker can prove handy even for optimizing the build pipeline. Indeed, using docker-compose, the Docker orchestration tool, together with volumes, you can build your app stage by stage, passing the result of each build step to the next through shared Docker data *volumes*. At the end of the pipeline, you have a container which, with access to all of the artifacts built so far – via the shared data volumes – launches the actual services of your application.
Docker Compose in Action
Let's see an example in action.
You have two stages of building here:
1. First, you have to build the front-end assets, running your npm / Bower / Grunt toolchain over the project's front-end sources
2. Then, you have to build AND run the *Golang* server, first downloading your project's dependencies (assuming you used Godeps), then building and installing the *Golang* service, and running it as a daemon at the startup of your app (containerized or not)
If we want to use containers to approach such a situation, we'd have to:
- Spin up a container based on a node image to build the front-end assets
- Store these generated front-end assets in a data volume, so we can persist them and hand them over to the back-end
- Spin up a container running Golang, mount the previously prepared data volume on it, fetch the dependencies, build the service, and assign an entry point (that is, the command that fires the service upon launch of the container)
While we could proceed through these steps by hand just fine, or script them using Shell or whatever, we can also use docker-compose, the very useful orchestration tool that comes with the Docker Toolbox distribution. (Linux users might have to install it by hand.)
docker-compose makes it easy to orchestrate the building of Dockerfiles and the creation and mounting of data volumes, and it treats the generated images and containers as services, abstracting away the complexity of managing your containers (and any dependencies between them) by hand.
The structure of our application directory on the local host – from which we'll launch the image and container creation – is as follows:
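A minimal layout consistent with the paths used in this article – the two Dockerfiles and docker-compose.yml come from the text, while the individual source file names are assumptions:

```
.
├── Dockerfile            # builds the Golang server
├── docker-compose.yml    # orchestrates the services
├── main.go               # Golang server sources (name is an assumption)
└── www
    └── public
        ├── Dockerfile    # builds the front-end assets
        ├── package.json
        ├── bower.json
        └── Gruntfile.js
```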
The front-end assets are built as described in the Dockerfile stored under the www/public folder.
FROM node
COPY . /public
WORKDIR /public
# the final grunt task is an assumption: the original command was truncated
RUN npm install -g grunt-cli && \
    npm install -g bower && \
    npm install && \
    bower --allow-root install && \
    grunt
FROM golang
RUN mkdir -p /go/src/ric-project/www/public
COPY . /go/src/ric-project
WORKDIR /go/src/ric-project
RUN go get -d -v
RUN go clean && go build && go install -v
# the binary name below is an assumption
ENTRYPOINT ["ric-project"]
Orchestrating using Docker Compose
Now, time to orchestrate the operation using docker-compose. In the docker-compose.yml file in the root of our project, type the following:
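A minimal sketch of such a docker-compose.yml, assuming a named data volume (here called `assets`) shared between the two services – the service names come from the text, while the volume name is an assumption:

```yaml
version: "2"
services:
  front-init:
    # builds the front-end assets image from www/public/Dockerfile
    build: ./www/public
    volumes:
      - assets:/public

  pipeline-server:
    # builds the Golang server image from the Dockerfile at the project root
    build: .
    depends_on:
      - front-init
    volumes:
      # hand the generated front-end assets over to the back-end
      - assets:/go/src/pipeline-sample/www/public

volumes:
  assets:
```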
As you can see, we tell docker-compose, when launching the pipeline-server service, to begin by launching front-init, and then to build the image and run a container from it, as described by the Dockerfile present at the root of the project.
The data volume holding the generated front-end assets will then be mounted on the container running as part of the pipeline-server service, specifically at /go/src/pipeline-sample/www/public in this container.
To launch the pipeline-server service (which will also trigger the front-end assets build), just run docker-compose like so:
docker-compose up -d pipeline-server
Et voilà!