Kamal Deployment
Rails' official deployment tool: Docker-based zero-downtime deploys
Kamal 2 is the official deployment tool for Rails 8, providing Docker container-based zero-downtime deployments. Deploy to any VPS (DigitalOcean, Hetzner, AWS EC2, etc.) without a PaaS like Heroku.
What is Kamal?
Kamal (formerly MRSK) is a tool built by 37signals to deploy Basecamp and HEY. Starting with Rails 8, rails new includes Kamal configuration by default.
Key features:
Docker container-based deployment
Zero downtime: traffic switches only after new container passes health check
kamal-proxy: lightweight reverse proxy replacing Traefik (from Kamal 2)
Automatic SSL via Let's Encrypt
Rollback: restore the previous version with a single `kamal rollback`
Multi-server deployment support
deploy.yml Configuration
```yaml
# config/deploy.yml
service: my-app
image: my-dockerhub-user/my-app

servers:
  web:
    hosts:
      - 123.456.789.10
    options:
      network: "private"

proxy:
  ssl: true
  host: my-app.com

registry:
  username: my-dockerhub-user
  password:
    - KAMAL_REGISTRY_PASSWORD

builder:
  local:
    arch: amd64
    host: unix:///var/run/docker.sock
```
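The `password` entry is a secret reference, not a literal value. In Kamal 2, secret references are resolved from the `.kamal/secrets` file rather than directly from the shell environment; a minimal sketch, assuming the password is exported in your shell:

```shell
# .kamal/secrets — Kamal 2 resolves secret references from this file.
# Here we simply pass through a variable from the local environment.
KAMAL_REGISTRY_PASSWORD=$KAMAL_REGISTRY_PASSWORD
```

You can also have this file shell out to a password manager instead of echoing environment variables.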
Deployment Flow
```
kamal deploy
  ↓
1. Build Docker image locally
  ↓
2. Push image to Docker Hub (or GHCR)
  ↓
3. Server pulls image
  ↓
4. Start new container + health check
  ↓
5. kamal-proxy switches traffic to new container
  ↓
6. Remove old container
```
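The health check in step 4 is what makes the switch zero-downtime: kamal-proxy only routes traffic once the new container responds successfully. A sketch of tuning it in deploy.yml (option names as per Kamal 2's proxy configuration; `/up` is the Rails default health endpoint):

```yaml
proxy:
  ssl: true
  host: my-app.com
  healthcheck:
    path: /up      # Rails 8 ships a built-in /up endpoint
    interval: 1    # seconds between polls
    timeout: 30    # give slow-booting apps time before failing the deploy
```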
⚠️ DigitalOcean + Kamal: Local Build Recommended
Building Docker images directly on small DigitalOcean Droplets (1-2GB RAM) causes extreme CPU/memory load. Rails Docker builds consume significant resources with gem compilation and asset precompilation.
Recommended pattern: Local build → Registry push → Server pull
```yaml
builder:
  local:
    arch: amd64
    host: unix:///var/run/docker.sock
```
The `builder.local` setting builds images on your local machine and pushes them to Docker Hub or GHCR; the server only pulls the finished image.
Why server builds are problematic:
- `bundle install`: compiling dozens of gems → CPU at 100% plus 1GB+ of memory
- `rails assets:precompile`: Tailwind CSS + Vite build → additional memory
- 1GB Droplet: the OOM Killer terminates processes, affecting the running app
- 2GB Droplet: response times 10x slower during the build
💡 Real-world experience: Solid Queue memory issue
Solid Queue spawns 3 workers by default. Each worker is an independent process consuming ~178MB, meaning Solid Queue uses more memory than Rails itself.
```
Process                                  Memory
─────────────────────────────────────────────────
ruby (Rails/Puma)                         357MB
bundle (Solid Queue worker ×3)   178MB × 3 = 534MB
supervisord                                18MB
─────────────────────────────────────────────────
Total                                    ~910MB
```
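A quick sanity check of the arithmetic above, using the per-process figures the author reports:

```ruby
# Memory budget on a small droplet (per-process figures from the table above, in MB)
rails_puma  = 357
sq_workers  = 178 * 3  # three Solid Queue worker processes
supervisord = 18

total = rails_puma + sq_workers + supervisord
puts "#{total}MB"  # prints "909MB" — essentially the whole of a 1GB droplet
```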
OOM on a 1GB server is inevitable. Even on a 2GB Droplet, considering OS + Docker overhead, normal operation itself is near the memory limit. OOM issues were especially prominent during deploys, when the new container starts while the old one drains.
Note that even with `processes: 1, threads: 1` in the production config, three `bundle` processes still spawned. Reducing the worker count can save memory, but a small VPS remains fundamentally tight.
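For reference, worker count is controlled in Solid Queue's own config file, not in Puma's. A sketch (key names as per Solid Queue's documentation):

```yaml
# config/queue.yml — run a single worker process instead of the default
production:
  workers:
    - queues: "*"
      threads: 3
      processes: 1
```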
Memory optimization options:
1. Disable YJIT (`RUBY_YJIT_ENABLE=0`): saves 30-50MB per process (slight performance trade-off)
2. `MALLOC_ARENA_MAX=2`: reduces Ruby memory fragmentation (saves 50-100MB per process, very effective)
3. Switch to Sidekiq: thread-based, runs as a single process, significant memory savings, but requires Redis
4. Move Solid Queue to another machine: not possible with SQLite (the database file cannot be shared across hosts)
The author chose the simplest approach, environment variables:

```
MALLOC_ARENA_MAX = "2"   # Prevent Ruby memory fragmentation
RUBY_YJIT_ENABLE = "0"   # Disable YJIT to save memory
```
This alone saves ~80-150MB per process, but was not a fundamental solution.
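With Kamal, these variables would live in the `env` section of deploy.yml (a sketch; `clear` marks non-secret values that are passed to the container as-is):

```yaml
# config/deploy.yml
env:
  clear:
    MALLOC_ARENA_MAX: "2"   # Prevent Ruby memory fragmentation
    RUBY_YJIT_ENABLE: "0"   # Disable YJIT to save memory
```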
The author ultimately couldn't resolve the instability on 2GB Droplets and migrated to Fly.io. Upgrading to a larger Droplet (4GB+) might solve it, but Fly.io was a better cost-performance choice.
Local build advantages:
- Leverages your development machine's abundant CPU/RAM (fast builds on an M1/M2 Mac)
- The server only pulls and starts the container → virtually no load
- Deploys without affecting the running application
Key Commands
```shell
kamal setup                             # Initial server setup (Docker, kamal-proxy)
kamal deploy                            # Build → Push → Deploy
kamal redeploy                          # Redeploy existing image (config changes)
kamal rollback                          # Roll back to the previous version
kamal app logs                          # View app logs
kamal app exec 'bin/rails console'      # Remote Rails console
kamal app exec 'bin/rails db:migrate'   # Run migrations
```
⚠️ Vite autoBuild: Must Be false in Production
When deploying a Rails app with Vite, never set `autoBuild: true` for production in config/vite.json. With `autoBuild: true`, `vite build` runs on every user request, causing extreme CPU/memory load.
The author experienced server resource exhaustion on every access due to this misconfiguration.
```jsonc
// config/vite.json
{
  "development": { "autoBuild": true },   // ✅ only true in dev
  "test": { "autoBuild": false },
  "production": { "autoBuild": false }    // ⚠️ must be false!
}
```
In production, run `vite build` during the deploy (via `RUN bin/rails assets:precompile` in the Dockerfile) and serve only pre-built assets at runtime.
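For reference, the Rails 8 default Dockerfile already does this at image build time, roughly as below (`SECRET_KEY_BASE_DUMMY=1` lets precompilation run without production credentials):

```dockerfile
# Precompile assets during the image build, not at runtime
RUN SECRET_KEY_BASE_DUMMY=1 ./bin/rails assets:precompile
```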
Kamal vs Other Deployment Methods
| Item | Kamal + VPS | Heroku | Fly.io | Capistrano |
|---|---|---|---|---|
| Monthly cost | $5-12 (VPS) | $25+ (Eco) | $5+ | VPS cost only |
| Docker required | Yes | No (Buildpack) | Yes | No (direct server) |
| Zero downtime | Yes (kamal-proxy) | Yes (paid) | Yes | Partial (config needed) |
| Auto SSL | Yes (Let's Encrypt) | Yes | Yes | Manual |
| SQLite support | Yes (Volume) | No | Yes (Litestream) | Yes |
| Server control | Full | Limited | Limited | Full |
| Learning curve | Medium | Low | Medium | High |
SQLite + Kamal
To deploy SQLite-based apps with Kamal, use Docker Volumes to store database files outside the container.
```yaml
servers:
  web:
    volumes:
      - data:/rails/storage   # Persist SQLite DB files
```
Note: SQLite works on single servers only. For multi-server deployments, use a client-server DB like PostgreSQL.
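The mount only helps if the database file actually lives under the mounted path. Rails 8's default production database.yml already points there; a sketch of the relevant part:

```yaml
# config/database.yml — production DB file stored under storage/,
# which the volume above maps to /rails/storage in the container
production:
  primary:
    adapter: sqlite3
    database: storage/production.sqlite3
```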
Key Points
1. `gem install kamal`: install the Kamal CLI
2. `kamal init`: generate config/deploy.yml
3. Configure the server IP, Docker registry, and env vars in deploy.yml
4. Set `builder.local`: local build + registry push (required for small servers)
5. `kamal setup`: initial server setup (installs Docker and kamal-proxy)
6. `kamal deploy`: build → push → zero-downtime deploy
Pros
- ✓ Rails 8 official: ready to use with `rails new`
- ✓ Production for $5-12/mo (cheaper than PaaS)
- ✓ Zero-downtime deployment out of the box
- ✓ Automatic SSL (Let's Encrypt)
- ✓ Full server control: no vendor lock-in
- ✓ Instant rollback with `kamal rollback`
- ✓ SQLite supported: Render/Heroku lack persistent volumes, so SQLite is impossible there
- ✓ VPS providers like DigitalOcean are generally cheaper than traditional hosts (e.g. onamae.com VPS), with richer third-party integrations and a cleaner UI
Cons
- ✗ Must manage the server yourself (security patches, monitoring)
- ✗ Basic Docker knowledge required
- ✗ Initial setup more complex than a PaaS
- ✗ Server build load issues on small servers (solved with local builds)
- ✗ Requires an SSH-accessible server (no serverless)
- ✗ VPS operational overhead: Docker/app logs accumulate and fill the disk to 100%, requiring periodic log rotation and disk/resource monitoring, tasks a PaaS handles automatically
- ✗ Docker image registry management required: Render/Fly.io auto-build from a code push, but Kamal requires storing images in Docker Hub etc. Private repos may incur extra costs, and old images accumulate, requiring storage management
- ✗ SQLite backups must be self-implemented: not a Kamal issue per se, but a SQLite + VPS concern. Managed DBs (RDS etc.) offer automatic backups, but SQLite files require building your own backup pipeline with cron + Litestream etc.
- ✗ Frequent deploy troubles: even the author, with infrastructure experience, hit issues on nearly every deploy. Without infra knowledge, troubleshooting can be very difficult. Render/Fly.io complete deploys with just a git push, while Kamal can fail at the Docker, SSH, network, or proxy layer, making the debugging surface much wider
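For the log-accumulation problem above, Kamal itself can cap Docker's per-container log files via its `logging` setting, which maps to Docker's logging options (a sketch; sizes are arbitrary):

```yaml
# config/deploy.yml — limit container logs so they can't fill the disk
logging:
  driver: json-file
  options:
    max-size: "10m"   # rotate each log file at 10MB
    max-file: "3"     # keep at most 3 rotated files
```

This does not cover host-level logs (journald, /var/log), which still need their own rotation.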