In Part 1 of this series, I explained my basic setup to install Ruby on Rails and my preferred tools, and shared the source code in this repository. A collaborator who pulls the source code has to set up a Ruby environment (using rbenv, RVM, or another version manager) and start some services (Postgres, Sass compilation, Redis, etc).

What if I could automate this initial setup? Not only does it provide a faster onboarding experience, it also establishes a standardized working environment. This article explains how to configure the application and a database using Docker, while Part 3 demonstrates how to use Docker Compose to automate the docker scripts and provide a complete development experience using Docker containers.

Welcome, Docker!

As a pragmatic developer, for the last 10 years I deployed Rails apps directly to Heroku. I recently started learning about Zero, and it has been my guide to learning how to build my own infrastructure. Docker was the first step on this journey. The best documentation to learn it is the official hands-on Getting Started guide: a didactic example project that is fun, realistic and startup-oriented!

Docker Getting Started

Dockerizing my working environment

The first challenge was creating the app’s container image. The officially supported ruby images are listed here. The ruby:slim tag seems to be the best alternative, because running the docker scan command reveals the resulting image has a minimal number of vulnerabilities.

FROM ruby:slim

RUN apt update -qq && apt upgrade -y

RUN apt install -y    \
  # https://github.com/rbenv/ruby-build/wiki#suggested-build-environment
  autoconf            \
  bison               \
  build-essential     \
  libssl-dev          \
  libyaml-dev         \
  libreadline6-dev    \
  zlib1g-dev          \
  libncurses5-dev     \
  libffi-dev          \
  libgdbm6            \
  libgdbm-dev         \
  libdb-dev           \
  # Postgres lib to build pg gem
  postgresql-contrib  \
  libpq-dev           \
  # Node.js and npm, required below to install Yarn
  # https://classic.yarnpkg.com/lang/en/docs/install/#mac-stable
  nodejs npm

RUN npm install --global yarn


First things first [1]:

  • apt update updates the package sources to get the latest list of available packages in the repositories
  • apt upgrade updates all the packages presently installed in our Linux system to their latest versions.

The Dockerfile installs the recommended Ruby build dependencies, as explained in the first article, in particular the Postgres libraries necessary to build the pg gem, and npm, necessary to install the Yarn package manager, responsible for managing frontend dependencies like the Sass compiler, normalize.css utilities, and so on.

When the image is built for the first time, Docker needs a few minutes to download and set up all the system dependencies. Once this initial setup completes, the dependencies are cached as image layers, and subsequent builds finish quickly (see below).

% docker build -t demo .
[+] Building 2.5s (10/10) FINISHED
 => [internal] load build definition from Dockerfile                                                                             0.0s
 => => transferring dockerfile: 692B                                                                                             0.0s
 => [internal] load .dockerignore                                                                                                0.0s
 => => transferring context: 2B                                                                                                  0.0s
 => [internal] load metadata for docker.io/library/ruby:slim                                                                     2.2s
 => [auth] library/ruby:pull token for registry-1.docker.io                                                                      0.0s
 => [1/5] FROM docker.io/library/ruby:slim@sha256:a7af546dc28bd3ea3ea86f5e84ec2e394413b8dc6a442d5d47ee50efe507c968               0.0s
 => CACHED [2/5] RUN apt update -qq && apt upgrade -y                                                                            0.0s
 => CACHED [3/5] RUN apt install -y      autoconf              bison                 build-essential       libssl-dev            0.0s
 => CACHED [4/5] RUN npm install --global yarn                                                                                   0.0s
 => CACHED [5/5] WORKDIR /src                                                                                                    0.0s
 => exporting to image                                                                                                           0.0s
 => => exporting layers                                                                                                          0.0s
 => => writing image sha256:a11ebe7b958336e0de3cddc396e4581c0e83331a64bc6d6240164c06f0d029dd                                     0.0s
 => => naming to docker.io/library/demo
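This caching behavior is why Dockerfiles are usually ordered from least to most frequently changing instructions. A common Rails pattern, shown here only as a sketch of the idea (it is not part of this article’s Dockerfile), copies the Gemfile before the rest of the source:

```dockerfile
# Gems get their own layer: it is rebuilt only when Gemfile or
# Gemfile.lock change, not on every source code edit.
COPY Gemfile Gemfile.lock ./
RUN bundle install

# Copying the source comes last, so code edits invalidate
# only the layers from this point onwards.
COPY . .
```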

Keep this Layer concept in mind while writing the Dockerfile, and design it so the layers can be efficiently built and cached. As advised in the Docker Getting Started guide, after the image is built it’s recommended to run docker scan to let Snyk analyze the security issues in the resulting image:

% docker scan demo

Testing demo...

Package manager:   deb
Project name:      docker-image|demo
Docker image:      demo
Platform:          linux/amd64
Base image:        ruby:3.0.2-slim-bullseye

Tested 639 dependencies for known vulnerabilities, found 117 vulnerabilities.

According to our scan, you are currently using the most secure version of the selected base image

For more free scans that keep your images secure, sign up to Snyk at https://dockr.ly/3ePqVcp

The last pieces of the Dockerfile set /src as the container’s working directory, where the app’s source code will be mounted, and document that port 3000 (the default Puma server port) will be exposed.
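A sketch of those closing lines (the exact Dockerfile is in the repository; the /src directory and port come from the text above):

```dockerfile
# All subsequent instructions (and docker run commands) execute from /src
WORKDIR /src

# Documentation only: the port still has to be published with -p at run time
EXPOSE 3000
```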


At this point, it’s already possible to start the container. By default, it opens an irb session, the default command defined by the ruby:slim image.

% docker run -it --rm demo

The -it flags are necessary to open an interactive terminal.

% docker run --help
-i, --interactive      Keep STDIN open even if not attached
--rm                   Automatically remove the container when it exits
-t, --tty              Allocate a pseudo-TTY

It’s possible to override the default command with any executable available in the container.

% docker run --rm demo echo Hello World
Hello World

% docker run -it --rm demo /bin/bash
root@45eb8ff58d50:/src# ls -al
total 8
drwxr-xr-x 2 root root 4096 Oct 25 19:59 .
drwxr-xr-x 1 root root 4096 Nov  3 19:58 ..

Binding the host’s working directory

The command docker run -it --rm demo /bin/bash started a bash terminal into the container and then ls -al listed the contents of the app container directory. It’s empty and not useful: we need a way to make the app’s source code visible from inside the container, something that can be achieved by binding a volume (-v) mapping the host’s work directory ($(pwd)) to the container’s /src directory.

% docker run -it --rm -v "$(pwd):/src" demo /bin/bash
root@b711ed71a474:/src# ls -al
total 60
drwxr-xr-x 27 root root  864 Nov  3 14:27 .
drwxr-xr-x  1 root root 4096 Nov  3 20:00 ..
drwxr-xr-x 12 root root  384 Nov  2 18:34 .git
-rw-r--r--  1 root root  246 Nov  2 16:51 .gitattributes
-rw-r--r--  1 root root  722 Nov  2 16:51 .gitignore
-rw-r--r--  1 root root    6 Nov  2 16:51 .ruby-version
-rw-r--r--  1 root root  648 Nov  3 18:45 Dockerfile
-rw-r--r--  1 root root 1968 Nov  2 17:03 Gemfile
-rw-r--r--  1 root root 4928 Nov  2 17:03 Gemfile.lock
-rw-r--r--  1 root root   58 Nov  2 16:51 Procfile.dev
-rw-r--r--  1 root root  374 Nov  2 16:51 README.md
-rw-r--r--  1 root root  227 Nov  2 16:51 Rakefile
drwxr-xr-x 11 root root  352 Nov  2 16:51 app
drwxr-xr-x  8 root root  256 Nov  2 16:51 bin
drwxr-xr-x 16 root root  512 Nov  2 16:51 config
-rw-r--r--  1 root root  160 Nov  2 16:51 config.ru
drwxr-xr-x  4 root root  128 Nov  2 16:52 db
drwxr-xr-x  4 root root  128 Nov  2 16:51 lib
drwxr-xr-x  4 root root  128 Nov  2 16:51 log
drwxr-xr-x 20 root root  640 Nov  2 16:51 node_modules
-rw-r--r--  1 root root  251 Nov  2 17:37 package.json
drwxr-xr-x  9 root root  288 Nov  2 16:51 public
drwxr-xr-x  3 root root   96 Nov  2 16:51 storage
drwxr-xr-x 11 root root  352 Nov  2 17:05 test
drwxr-xr-x  9 root root  288 Nov  2 16:52 tmp
drwxr-xr-x  4 root root  128 Nov  2 16:51 vendor
-rw-r--r--  1 root root 4919 Nov  2 16:51 yarn.lock

Aha, it works! Refer to this guide to learn more about Bind Mounts.

Fetching the RubyGems

Now that we know docker run can execute any command we pass on the command line, let’s try to install the project gems:

% docker run -it --rm -v "$(pwd):/src" demo bundle install
Fetching gem metadata from https://rubygems.org/
Fetching gem metadata from https://rubygems.org/............
Fetching https://github.com/fabiolnm/minitest-rails.git
Fetching rake 13.0.6
Installing rake 13.0.6
Fetching concurrent-ruby 1.1.9
Fetching minitest 5.14.4
Installing minitest 5.14.4
Fetching builder 3.2.4

It works! Well, kind of. If you run it twice, you’ll see all the gems are downloaded over and over again. It’s possible to fix this by mounting a Named Volume. Unlike a Bind Mount, a Named Volume is stored in Docker’s own storage area (inside the Docker virtual machine on Mac and Windows), not visible from the host directory. The second run now uses previously fetched gems instead of downloading them again:

% docker run -it --rm         \
  -v "$(pwd):/src"            \
  -v "gems:/usr/local/bundle" \
  demo bundle install

Fetching gem metadata from https://rubygems.org/
Fetching gem metadata from https://rubygems.org/............
Using rake 13.0.6
Using concurrent-ruby 1.1.9
Using builder 3.2.4

Now it’s already possible to start the Rails server. Although Puma starts listening on port 3000, that port is internal to the container’s own network. The flag -p 3000:3000 binds the host network port to the internal container port, making the container’s port accessible from the host. Pay special attention to the rails command: if Puma isn’t started with -b, it won’t respond to requests originating from the host.

% docker run --rm             \
  -v "$(pwd):/src"            \
  -v "gems:/usr/local/bundle" \
  -p 3000:3000                \
  demo rails s -b
=> Booting Puma
=> Rails 7.0.0.alpha2 application starting in development
=> Run `bin/rails server --help` for more startup options
Puma starting in single mode...
* Puma version: 5.5.2 (ruby 3.0.2-p107) ("Zawgyi")
*  Min threads: 5
*  Max threads: 5
*  Environment: development
*          PID: 1
* Listening on
Use Ctrl-C to stop

The app is responding from the container as demonstrated in the image below, and the error reminds us we still need to configure a container to run the Postgres database.

Postgres not running

Configuring a Postgres container

The official images for Postgres can be found here. On first run, the container’s entry point initializes a default postgres user and database, while the -e POSTGRES_HOST_AUTH_METHOD=trust environment variable allows connections without a password. The data directory needs to be mounted into a named volume, so the database is not lost when the container stops.

The host port 5432 is bound to the container’s port, making it possible to use clients (psql, pgAdmin) on the host machine to query the database. The -d flag detaches the container to run in the background, allowing us to keep using the terminal to start a client connection and test that the container is working:

% docker run                              \
  -e POSTGRES_HOST_AUTH_METHOD=trust      \
  -v databases:/var/lib/postgresql/data   \
  -dp 5432:5432 postgres
% psql -h localhost -U postgres
psql (14.0)
Type "help" for help.


Connecting to the database container

By default, each container starts in its own isolated network; containers can only communicate with one another if they join the same network. These commands restart the containers in a user-defined network named demo-network (created with docker network create). The --network-alias flag sets pg as the hostname of the database container. Then it’s easy to test that the app container can reach the database using ping.

# Creates the network, then starts a Postgres container in it with alias "pg"
% docker network create demo-network
% docker run --rm                       \
  -e POSTGRES_HOST_AUTH_METHOD=trust    \
  -v databases:/var/lib/postgresql/data \
  --network demo-network                \
  --network-alias pg                    \
  -dp 5432:5432 postgres

# Starts an app container into the demo-network
# Then uses ping to test connectivity with the "pg" host
% docker run -it --rm         \
  --network demo-network      \
  -v "$(pwd):/src"            \
  -v "gems:/usr/local/bundle" \
  demo /bin/bash -c "apt install iputils-ping -y && ping pg"

Setting up iputils-ping (3:20210202-1) ...
PING pg ( 56(84) bytes of data.
64 bytes from pg.demo-network ( icmp_seq=1 ttl=64 time=0.566 ms
64 bytes from pg.demo-network ( icmp_seq=2 ttl=64 time=0.138 ms
64 bytes from pg.demo-network ( icmp_seq=3 ttl=64 time=0.145 ms
--- pg ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2004ms

Configuring the Rails database settings

Finally, it’s necessary to edit the app’s config/database.yml file so the connection settings are read from these environment variables.

default: &default
  adapter: postgresql
  encoding: unicode
  host:     <%= ENV['POSTGRES_HOST']      %>
  database: <%= ENV['POSTGRES_DB']        %>
  username: <%= ENV['POSTGRES_USER']      %>
  # Can be blank in dev hosts
  password: <%= ENV['POSTGRES_PASSWORD']  %>
  pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

development:
  <<: *default
  database: demo_dev

test:
  <<: *default
  database: demo_test
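Rails renders database.yml with ERB before parsing it as YAML, so the substitution can be sanity-checked outside the app. This is only a sketch using a trimmed copy of the template above, with the same variables that will be passed to docker run:

```ruby
require 'erb'
require 'yaml'

# Simulate the variables that docker run will inject with -e
ENV['POSTGRES_HOST'] = 'pg'
ENV['POSTGRES_USER'] = 'postgres'

template = <<~YML
  default: &default
    adapter: postgresql
    host:     <%= ENV['POSTGRES_HOST'] %>
    username: <%= ENV['POSTGRES_USER'] %>
    pool:     <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>

  development:
    <<: *default
    database: demo_dev
YML

# Render the ERB tags, then parse the YAML (aliases: true allows &default/*default)
config = YAML.safe_load(ERB.new(template).result, aliases: true)
puts config['development']['host']      # => pg
puts config['development']['database']  # => demo_dev
puts config['development']['pool']      # => 5
```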

Then run the app container in the same network, and pass the Postgres env vars:

% docker run --rm             \
  --network demo-network      \
  -v "$(pwd):/src"            \
  -v "gems:/usr/local/bundle" \
  -p 3000:3000                \
  -e POSTGRES_HOST=pg         \
  -e POSTGRES_USER=postgres   \
  demo /bin/bash -c "bundle && rails db:create db:migrate && rm -f tmp/pids/server.pid && rails s -b"
Dropped database 'demo_dev'
Dropped database 'demo_test'
Created database 'demo_dev'
Created database 'demo_test'
=> Booting Puma
=> Rails 7.0.0.alpha2 application starting in development
=> Run `bin/rails server --help` for more startup options

And now the app is running in 2 docker containers: one for the Web Server (Puma) and another for the database server (Postgres).

Running on Docker


While containerizing the app infrastructure brings many advantages, like automating the setup and starting the design of the future deployment architecture, the docker commands introduce too many details to remember during development. Part 3 will introduce Docker Compose and move almost all of these details into the docker-compose.yml configuration file.

About Fábio Miranda

Photo of Fábio Miranda I'm currently living in Vancouver, Beautiful British Columbia, Canada, where my family and I started a new journey in 2023. I'm a hard-worker, currently the CTO at 7GEN, helping to remove the biggest barriers to electrification for medium- and heavy-duty commercial fleets.
Computer Engineer since 2004, graduated from ITA - Instituto Tecnológico de Aeronáutica (Aeronautical Technological Institute). I've been working with software development since then in many different tech stacks (Java, Python, Ruby, Golang, NodeJS) and industries (Aviation, Consulting, Brazilian Payments processing startups, Schools, Fundraising, and Logistics).