If you’ve been following my blog for a while, you probably saw my post on using Docker Compose to improve Domino Data Lab’s development environment (titled “Docker Composing for Fun and Profit”). This post is the sequel.
Over the last few weeks we prioritized closing the loop on these development environment improvements because we had three new developers joining the fray. Historically, our machine bootstrapping process has been a disaster. Getting my machine fully up and running when I joined last year took two days, if memory serves. The good news is that it seems like we’ve successfully tamed the beast.
Domino now has a working, containerized development environment and a simple Python-based bootstrap script for OS X and Ubuntu to help set a new environment up. Together these tools cut the time it takes to get a working environment down to a few hours or less. Yesterday I took them both out for a spin on an AWS server I was setting up “just to see,” and I got to a working environment in right around an hour.
Today I want to share a few things about what we’re doing to make this successful. But first, let’s review where we started and then move on to where we are today.
The setup I described in my last post had roughly this structure:
- Container for database, other services.
- Container for holding the application under development.
- On Linux, mount the application under development (AUD) directly. On OS X, use rsync to push changes into the Docker VM on your machine.
- Use docker-compose to orchestrate all of these containers.
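To make that structure concrete, here’s a minimal sketch of what such a docker-compose.yml could look like. The service names, images, and paths are illustrative assumptions, not our actual configuration:

```yaml
# Hypothetical sketch of the layout described above.
version: '2'
services:
  db:
    image: postgres:9.5      # database (and similar service) container
  app:
    build: .                 # container holding the application under development
    volumes:
      - .:/app               # direct mount on Linux; rsync'd into the VM on OS X
    depends_on:
      - db
```

With a file like this, `docker-compose up` brings up the database and the application container together, which is the orchestration role docker-compose plays in the setup above.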
If you need more detail than that feel free to go through the previous post. 🙂 Now a bit about where we are today.
OS X Volume Mounting with Speed!
I made some pretty important changes to the way I set things up on OS X since my first post. In particular, you may recall that in my first post I recommended using rsync to move changes from my host system (OS X) into the VirtualBox VM. This hack needed to exist because of the horrible performance of VirtualBox’s own file sharing system.
Since then I found a much easier setup in the form of Dinghy. Dinghy is a wrapper around docker-machine that sets up NFS sharing of the /Users folder between OS X and the VM that Docker actually runs in. NFS is much, much more performant than VirtualBox’s file sharing, enough so that we can mount our source directly into the container without issue. Things now work much closer to how they do on Linux-based systems, modulo one important issue: some of our containers expected to be able to chown and chmod things as root, and Dinghy’s default settings prevent this over NFS.
For most folks these default settings are fine, but occasionally people like us need an escape hatch in order to do what we want to do. I proposed codekitchen/dinghy#170 (which is merged but not yet released) to provide just such an escape hatch and permit overriding the default file sharing settings provided by Dinghy. With that change in place things are mostly smooth in the file sharing department.
That said, a word of advice: NFS has some quirks. Some applications (such as pip) will blow up if a folder they depend on turns out to be an NFS share. (In our case we had one container where /tmp was mounted from the host. Boom!)
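One way to sidestep that class of problem is to keep scratch directories like /tmp off NFS entirely by backing them with tmpfs inside the container. This is a sketch of the idea, not our actual configuration:

```yaml
# Hypothetical sketch: give the container its own in-memory /tmp
# rather than mounting it from an NFS-backed host folder.
version: '2'
services:
  app:
    image: ubuntu:16.04
    tmpfs:
      - /tmp
```

With `tmpfs` in place, tools like pip that get upset about NFS semantics see an ordinary local filesystem for their temp files.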
Bare Metal is Sometimes Best
The most important thing we learned was that the container for the Application Under Development was a bad idea for our system.
In our case, our AUD has to do a good bit of I/O on the file system and cares a lot about file ownership and permissions for security reasons. These reasons largely only apply in the production system, but minimizing differences between production and development is a priority for us. It’s also not uncommon for sbt to do some I/O on our behalf that an IDE like IntelliJ will want to inspect in order to provide helpful features like autocompletion.
Anyone who has done a little bit of work in Docker will tell you that user ID and group ID management between Docker containers and the host system is a little bit of a dumpster fire. After a long, winding adventure in trying to make the UIDs/GIDs automatically line up regardless of your actual host system, we took a step back and re-evaluated the consistency (or sanity) of trying to do this in the first place. We landed on the conclusion that, although Docker was really useful for ensuring that the services we needed were set up and configured correctly, the cost of trying to get all of the moving pieces we wanted to use (IDEA, various other editors, sbt, other tools, etc.) working consistently wasn’t worth the benefit. Linux and OS X systems both experienced a good deal of pain around trying to do that.
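To give a flavor of the kind of workaround that adventure involved, here is a hedged sketch of matching the container user to the host user so files created in a bind mount come out owned correctly. The helper function is hypothetical, for illustration only, not our actual tooling:

```python
import os

def docker_run_as_host_user(image, command):
    """Build a `docker run` invocation that runs the container process
    with the host user's UID/GID, so files written into the bind-mounted
    volume are owned by the host user rather than root.

    Hypothetical helper for illustration; it only constructs the
    command list and does not execute anything.
    """
    uid, gid = os.getuid(), os.getgid()
    return [
        "docker", "run", "--rm",
        "--user", "{0}:{1}".format(uid, gid),  # match host UID/GID inside the container
        "-v", "{0}:/app".format(os.getcwd()),  # bind-mount the current directory
        image,
    ] + list(command)

cmd = docker_run_as_host_user("ubuntu:16.04", ["touch", "/app/build.log"])
```

This works until a container process actually needs to be root (to chown, to bind low ports, to install packages), at which point the whole scheme starts to fall apart, which is roughly where we ended up.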
We concluded that bare metal is sometimes best. With the number of metaphorical coconuts in the air that depend on proper UID/GID/permissions it just made more sense for us to keep the actual application we are building on the host system and use containers for the services it needs.
Bootstrap Scripts Feel Painful, but Do Work
Another part of this project was the bootstrap script for OS X and Ubuntu machines. I don’t have a lot to say about this except that, as the author, it felt like a really burdensome thing to write. At various moments I was convinced it wouldn’t be worth the time I was spending on it.
Yesterday I set up a brand new Ubuntu environment using that script. It was worth it.
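For a sense of the shape of such a script, here is a heavily simplified sketch. The function names and package lists are illustrative assumptions, not our actual script:

```python
import subprocess
import sys

def detect_os():
    """Return 'osx' or 'ubuntu', the two platforms the bootstrap supports."""
    if sys.platform == "darwin":
        return "osx"
    if sys.platform.startswith("linux"):
        return "ubuntu"
    raise RuntimeError("unsupported platform: %s" % sys.platform)

def bootstrap(dry_run=True):
    """Install dev-environment prerequisites for the detected OS.

    Illustrative only: a real script needs error handling, idempotence
    checks, and many more steps (Docker setup, repo checkout, etc.).
    """
    commands = {
        "osx": [["brew", "install", "docker", "docker-compose"]],
        "ubuntu": [["sudo", "apt-get", "install", "-y", "docker-engine"],
                   ["sudo", "pip", "install", "docker-compose"]],
    }
    steps = commands[detect_os()]
    for cmd in steps:
        if dry_run:
            print("would run: %s" % " ".join(cmd))
        else:
            subprocess.check_call(cmd)
    return steps
```

The real value of a script like this isn’t any single step; it’s that every step that used to live in a wiki page or someone’s head is now encoded and repeatable.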
We’ve come a long way since my first blog post, and we’re continuing to improve.
Since my initial work with Dinghy, I’ve started participating in the Docker for Mac Beta, which is based on xhyve virtualization and a custom file sharing layer that is both a) performant and b) not subject to some of the peculiarities of NFS. I think that once this is released to the general public it’ll be a much better experience than what we have today.
As always, I would love feedback on what you thought of this post! Positive feedback is what convinces me to write more frequently, so if you enjoyed this let me know!