Docker is perfect for making a developer’s life easier. Thanks to containers, you can split the many facilities that make up your application into microservices, dividing your problem into more manageable blocks. For instance, you can spin up a container for a Redis database alongside a container running a Node.js / Express image, and have your infrastructure up and running with no hassle.
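As a minimal sketch of that setup, a `docker-compose.yml` along these lines would do; the service names, ports and the `server.js` entry point are assumptions, not prescriptions:

```yaml
# docker-compose.yml — hypothetical Redis + Node.js / Express setup
version: "2"
services:
  redis:
    image: redis:3.2
  web:
    image: node:6
    working_dir: /usr/src/app
    volumes:
      - .:/usr/src/app            # mount your project into the container
    command: node server.js       # assumes server.js is your Express entry point
    ports:
      - "3000:3000"
    depends_on:
      - redis                     # the app can reach Redis at the hostname "redis"
```

Running `docker-compose up` then brings up both containers together.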
Docker can also prove handy for optimizing the build pipeline. Using docker-compose, the Docker orchestration tool, together with volumes, you can build your app stage by stage, passing the result of each build step to the next through shared Docker data *volumes*. At the end of the pipeline, you have a container that launches the services of your application, with access to all of the artifacts built so far via the shared data volumes.
## Docker Compose in Action
But let’s see an example in action.
There are two build stages here:

1. First, you have to build the front-end assets of your application.
2. Then, you have to build AND run the *Golang* server: first downloading your project’s dependencies (assuming you use Godeps), then building and installing the *Golang* service, and finally running it as a daemon at the startup of your app (containerized or not).
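The Golang stage can be sketched as a Dockerfile; the import path `github.com/you/yourservice` is a placeholder for your own project:

```dockerfile
# Hypothetical Dockerfile for the Golang build-and-run stage
FROM golang:1.6
RUN go get github.com/tools/godep
COPY . /go/src/github.com/you/yourservice
WORKDIR /go/src/github.com/you/yourservice
RUN godep restore                    # fetch the dependency versions pinned in Godeps/
RUN go install .                     # build and install the service binary
ENTRYPOINT ["/go/bin/yourservice"]   # run the service when the container starts
```

The `ENTRYPOINT` is what makes the service start as the container’s main process, whether you run it by hand or through an orchestrator.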
If we want to use containers to approach such a situation, we’d have to:
- Spin up a container that builds the front-end assets of the application
- Store these generated front-end assets in a data volume, so we can persist them and hand them over to the back-end
- And spin up a container running *Golang*, mount the previously prepared data volume on it, fetch the dependencies, build the service, and assign an entry point (that is, the command that fires the service upon launch of the container).
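The steps above can be wired together in one compose file; this is a hypothetical sketch in which the service names, images and paths are assumptions:

```yaml
# Hypothetical docker-compose.yml connecting the two stages through a shared volume
version: "2"
services:
  frontend-builder:
    image: node:6
    working_dir: /app
    volumes:
      - .:/app
      - assets:/app/dist          # build output lands in the shared volume
    command: npm run build        # assumes an npm "build" script produces the assets
  server:
    build: .                      # a Dockerfile that builds the Golang service
    volumes:
      - assets:/srv/assets        # the Golang service reads the built assets here
    ports:
      - "8080:8080"
    depends_on:
      - frontend-builder
volumes:
  assets: {}                      # the named data volume shared by both stages
```

A single `docker-compose up` then runs the front-end build and starts the Golang server with the freshly built assets mounted in.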
While we could perform these steps by hand just fine, or script them with a shell, we can instead use docker-compose, the very useful orchestration tool that ships with the Docker Toolbox distribution (Linux users may have to install it by hand).