Tips for building a dev-env with docker

Using docker and docker-compose to run a development environment can be a good way to have all your services connected and running together.

I'm going to assume you've got a basic understanding of docker and how it works; if not, there's a good overview of what docker is here.

We're currently using docker-compose alongside docker to tie together several services for the Mozilla payments development environment.

Having used docker-compose (née fig) on the marketplace development environment, we learnt a lot about what did and didn't work.

Two of the biggest issues we hit were:

  • Dependencies and dep updates
  • Front-end tools

Handling deps offline

Note: I've written about this problem before in detail: Docker and dependencies

If you're running a development environment it's likely you'll be running pip or npm in the Dockerfile. The problem here is that installing all those deps forms one layer of the cache. As soon as you update requirements.txt or package.json that layer is invalidated and the step has to run again - if you're running the build yourself that can be painful.
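To make the invalidation concrete, here's a minimal sketch of the usual layer-caching arrangement (file and image names are illustrative):

```dockerfile
FROM python:2.7

# Copy only the deps file first: this layer (and the install below)
# stays cached until requirements.txt itself changes.
COPY requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt

# Source changes land in a later layer, so they don't re-trigger the
# install - but any edit to requirements.txt still does.
COPY . /app
WORKDIR /app
```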

I think there's still room for a better solution, but the best way to avoid deps becoming a problem is to use the docker hub to build your docker images from a branch in github.

This way you can just pull new images to update your entire environment and they'll have the latest deps. If you need a new dep, you can manually install it on the running container (see docker exec) and, when you commit the deps-file update, the newly built image will have your new dep in it.
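Day to day that looks something like this (container and package names are hypothetical):

```sh
# Pull freshly built images (deps already baked in) and restart:
docker-compose pull
docker-compose up -d

# Need a new dep right now? Install it into the running container:
docker exec -it payments_web_1 pip install requests

# ...then add it to requirements.txt and commit - the next hub-built
# image will include it.
```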

Front-end toolchain

The next problem that was a big one for us was the performance of front-end tooling in a docker container. If you're running anything other than Linux then you'll likely be running boot2docker in virtualbox. The problem with virtualbox is that you're probably sharing the code into the vm with vboxfs. Unfortunately vboxfs doesn't pass low-level file-change notifications from the host into the vm, so the file-watching tools we typically use for front-end work tend to run very slowly and cause virtualbox to eat CPU. In my experience NFS is better from a perf point of view, but still not great.

The trick to solving this one is again to leverage building images on the docker hub. Most of your front-end code is just producing static files, so from your dev-env's perspective you only need to provide the files and serve them from your favorite web server, e.g. nginx. The other neat thing is, if you do this right, you're not going to need to remember to commit built files to your tree. \o/

The way we do this is with a data volume container (or data-only container) that just contains the files. For developing locally we do a switcheroo and point that volume to our local source code in the docker-compose.yml.
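Here's a sketch of what that could look like in a (v1-format) docker-compose.yml - the service, image, and path names are illustrative, not the actual payments config:

```yaml
# Out of the box: nginx serves the built files from a data-only
# container whose image is built on the docker hub.
nginx:
  image: nginx
  volumes_from:
    - assets

assets:
  image: myorg/payments-assets  # contains only the built static files

# For local development, the switcheroo is swapping volumes_from for
# a host mount pointing at your checkout, e.g.:
#
#   nginx:
#     image: nginx
#     volumes:
#       - ./build:/usr/share/nginx/html
```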

All the file-watching tools (grunt/gulp and friends) and npm deps are installed and run on the host, not in a container.
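So the host-side loop is just the usual one (task and directory names are hypothetical):

```sh
# Run on the host, not in a container:
npm install
gulp watch  # writes built files into ./build, which nginx serves
```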

To have the docker env still work out of the box, we set up hooks on travis to publish the built files (when the tests have passed) to a different branch, and the docker hub builds the image from that.
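As a sketch of the idea (this isn't the actual payments config), the travis side could look something like this, with push credentials omitted for brevity:

```yaml
language: node_js
script:
  - npm test
  - gulp build            # hypothetical build task
after_success:
  # Publish the built files to a branch that the docker hub's
  # automated build is watching (auth via a token is omitted here).
  - git checkout -B dist
  - git add -f build/     # built files are gitignored on master
  - git commit -m "Publish built assets"
  - git push -f origin dist
```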

In summary

When things don't work you have to go back to the drawing board. Sometimes if you feel like you're fighting a tool then you're probably doing it wrong ™.

In our case leveraging the docker hub has made a real difference. Thanks to @AndyMckay for pushing us in that direction.

All in all I think we're starting to think more like we would for running docker in production. That's not a bad thing, and it means that going from a working development environment to something that could run in prod is less of a leap.

I've glossed over a lot of details - so if you'd like to know more take a look at the payments repos or let me know in the comments.
