Ruby on Rails 5 Dockerfile: 2019 Edition
Most other posts online about creating a production Docker image for Rails applications seem a little basic. Maybe the majority of developers have migrated to other platforms? I needed to dockerize a Rails application for a production deployment with a couple of requirements that other guides don’t seem to cover:
- It needs to be as small as possible
- It needs to use a production web server (we use Passenger, but Puma would be fine too)
- It needs to compile assets with webpack and install dependencies with Yarn (we use Webpacker)
- It needs to be able to pull gems from private GitHub repositories
I’m still getting used to Docker (I’m not an expert by any means), but I hope my learnings below are useful. I still need to do some work on configuration specifically, since we’re going to be using Kubernetes (which I’ve read has a feature called ConfigMap), but for now I’m overriding files like config.yml and database.yml with versions that are checked in.
Small as possible
For this requirement, I wanted to use the Alpine distro: it’s well known in the Docker world, and it even has official Ruby images, which are far more lightweight than Ubuntu-based ones.
There’s always room to improve further in this area, so I’m open to suggestions.
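As a concrete starting point, the base image line looks something like this (the exact Ruby version tag is an assumption; use whatever your app targets):

```dockerfile
# Official Ruby image on Alpine: far smaller than the Debian-based variants
FROM ruby:2.6-alpine
```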
Production Webserver (Passenger)
Installing Passenger was tricky, as it doesn’t have any official repositories for Alpine, and simply letting the passenger rubygem in the Gemfile attempt to install it fails.
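What eventually worked was installing a build toolchain with apk and compiling Passenger’s native support as part of the build. This is a sketch; the exact Alpine package names (build-base, curl-dev, pcre-dev, linux-headers) are assumptions to verify against your Alpine version:

```dockerfile
# Toolchain and headers Passenger needs to compile its native extension
RUN apk add --no-cache build-base curl-dev pcre-dev linux-headers \
 && gem install passenger \
 # Pre-build the native support binary so it doesn't compile at boot time
 && passenger-config build-native-support
```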
Webpack
This application’s frontend is built with React, Redux, and other modern frontend libraries, and we use the Webpacker gem (which has made working with a modern JS frontend in Rails a breeze). The Docker image therefore needs to install all frontend dependencies and be able to compile the assets.
I started getting occasional early exits with status code 137 when trying to compile assets, and eventually found that this was caused by the massive amount of working memory the container requires while compiling assets (it wasn’t unusual to see it using 2-3GB of RAM). Increasing the amount of memory available to Docker Desktop fixed the issue, but I’d like to find a way to reduce this footprint in the future (maybe removing Sprockets completely or moving to webpack 4 would make things better? I don’t know).
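One related knob: webpack runs on Node, and Node’s heap can be capped (or raised) with `--max-old-space-size`. A sketch of setting it during asset compilation; the 2048MB figure is an assumption to tune for your app:

```dockerfile
# Limit Node's old-space heap (in MB) while Webpacker compiles assets
RUN NODE_OPTIONS="--max-old-space-size=2048" \
    RAILS_ENV=production bundle exec rails assets:precompile
```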
Private gems
The issue with pulling gems from a private repository is that you either need an SSH key that can read from that repo, or you need to provide credentials for access via HTTPS, as these are the two strategies Bundler supports. I discarded the SSH key strategy as too complicated (the key would have to be provided to the image at build time, and an ssh-agent installed), and opted instead for the HTTPS version.
Doing some reading, I found that it’s possible to provide an OAuth token instead of a username and password; the token could be generated from a shared deployment account, or any developer could generate their own to build the image locally.
Bundler supports credentials for gem sources natively, so it’s possible to run bundle config GITHUB__COM abc123
with the generated GitHub token as part of the build. The problem with this approach is that you would be checking the token into the Dockerfile’s repository, unless you used Docker build-time variables, which would be an improvement, but even then the token would persist inside the image, viewable by running bundle config
.
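To make the leak concrete (the token value is a placeholder):

```shell
# Naive approach: the token is written into the image's Bundler config
bundle config GITHUB__COM abc123

# Anyone who can run the image can later read it back
bundle config
```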
It is possible to provide a build argument to a Docker image, have Bundler use it to authenticate with GitHub, and still keep the token out of the final image, by combining Docker build-time variables with Bundler’s support for credentials via environment variables. It looks like this:
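In sketch form (paths and version tags are assumptions): Bundler reads the `BUNDLE_GITHUB__COM` environment variable as the credential for github.com sources, and `ARG` values are visible as environment variables to `RUN` steps in their stage. Since plain `ARG` values can still be recovered from intermediate layers with `docker history`, a multi-stage build is what actually keeps the token out of the final image:

```dockerfile
# --- build stage: the token only exists here ---
FROM ruby:2.6-alpine AS builder
WORKDIR /usr/src/app

# Passed in with --build-arg; Bundler picks it up automatically
# as the HTTPS credential for github.com gem sources
ARG BUNDLE_GITHUB__COM

COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment --without development test

# --- final stage: the ARG (and the token) stays behind in the builder ---
FROM ruby:2.6-alpine
WORKDIR /usr/src/app
COPY --from=builder /usr/src/app/vendor/bundle ./vendor/bundle
```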
And to build the image:
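The build command then supplies the token (the token value and image name are placeholders):

```shell
# The token exists only at build time; nothing is hard-coded in the Dockerfile
docker build --build-arg BUNDLE_GITHUB__COM=abc123 -t myapp .
```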
Dockerfile
After a couple of days of tweaking to leverage Docker’s caching layers as much as possible, this is the end result:
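In sketch form, with everything above combined (paths, version tags, and Alpine package names are assumptions to adapt; dependency manifests are copied before the application code so those expensive layers stay cached until the manifests change):

```dockerfile
# --- build stage ---
FROM ruby:2.6-alpine AS builder
WORKDIR /usr/src/app

# Native build deps for gems and Passenger, plus Node and Yarn for Webpacker
# (package names are assumptions; verify against your Alpine version)
RUN apk add --no-cache build-base curl-dev pcre-dev linux-headers \
    nodejs yarn git tzdata postgresql-dev

# Private-gem token, supplied with --build-arg (stays in this stage only)
ARG BUNDLE_GITHUB__COM

# Gems first: this layer is reused until the Gemfile changes
COPY Gemfile Gemfile.lock ./
RUN bundle install --deployment --without development test

# JS dependencies next: cached until package.json/yarn.lock change
COPY package.json yarn.lock ./
RUN yarn install --frozen-lockfile

# Application code and asset compilation last (changes most often).
# SECRET_KEY_BASE is a dummy value: precompilation requires one to be set.
COPY . .
RUN RAILS_ENV=production SECRET_KEY_BASE=dummy \
    bundle exec rails assets:precompile

# --- runtime stage: only runtime libraries, no build toolchain ---
FROM ruby:2.6-alpine
WORKDIR /usr/src/app
RUN apk add --no-cache pcre tzdata postgresql-libs

COPY --from=builder /usr/src/app ./
ENV RAILS_ENV=production
EXPOSE 3000
CMD ["bundle", "exec", "passenger", "start", "-p", "3000"]
```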