# Raspberry Pi Cluster
A 3–6 Pi setup with role-based service allocation. The shape Portoser was originally built for: a fanless, low-power home lab that runs real services, with one beefier machine acting as the image-builder and (optionally) the control plane.
## TL;DR
| | |
|---|---|
| Hardware | 3–6 Raspberry Pis (Pi 4 or Pi 5), arm64-linux |
| Builder host | One x86 or Apple Silicon machine for `docker buildx` (Pis can build but it's painful) |
| Network | Wired Ethernet to a switch; static IPs or DHCP reservations |
| Storage | Quality A2 SD cards or USB SSDs — SD card I/O is the most common cause of flapping health checks |
| Time to first multi-Pi deploy | ~30–45 minutes if SSH is already keyed up |
| Biggest gotcha | Pis run sequential workloads well; throwing concurrent batch jobs at one Pi will spike load average and trip health checks |
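The load-average gotcha is easy to check for by hand. A minimal sketch (plain POSIX shell plus Linux's `/proc/loadavg`, no Portoser commands assumed) that compares the 1-minute load average against the core count; a 1-minute load sustained above the core count is the point where health checks start timing out:

```shell
#!/bin/sh
# Read core count and 1-minute load average (Linux-only: /proc/loadavg).
cores=$(nproc)
load1=$(cut -d ' ' -f1 /proc/loadavg)
echo "cores=$cores load1=$load1"

# awk does the floating-point comparison; sh can't compare floats itself.
if awk -v l="$load1" -v c="$cores" 'BEGIN { exit !(l > c) }'; then
  echo "saturated: 1-minute load exceeds core count"
else
  echo "ok: 1-minute load under core count"
fi
```

Run it on the Pi you're about to hand a batch job to; if it already reports saturated, stagger the jobs instead.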
## Why pick this shape
- Cheap 24/7 compute. Five Pi 4s draw less power than one desktop.
- The example registry that ships with the repo (`registry.example.yml`) is built around exactly this layout — you can read it as a working template.
- Sequential workloads (knowledge graphs, ingestion pipelines, batch processing) idle 85–98% of the time, which is exactly what Pis are good at.
## Role-based allocation
Don't think "spread services evenly." Think roles:
| Pi | Suggested role | Examples |
|---|---|---|
| pi-1 | Infrastructure | DNS (dnsmasq), Caddy, Vault, the Portoser web UI itself |
| pi-2 | Stateful | Postgres, Neo4j, Redis — bind these to one Pi with the SSD |
| pi-3 | Workflow | n8n, queues, scheduler |
| pi-4 | App services | Your own APIs and workers |
| pi-5..6 | Burst / experimental | Whatever you're testing this week |
The full registry-with-roles example lives in `registry.example.yml` at the repo root. Copy it, replace IPs and `ssh_user`, and you have a starting point.
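The copy-and-edit step is a one-liner with `sed`. A sketch under stated assumptions: the heredoc below is a trimmed stand-in for the shipped example file, the target filename `registry.yml`, the `192.168.0.x` subnet, and the `dietpi` user are all illustrative, not Portoser requirements:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in for the shipped registry.example.yml (trimmed to one host).
cat > registry.example.yml <<'EOF'
hosts:
  pi1:
    ip: 192.168.1.51
    ssh_user: pi
EOF

# Copy, then swap in your own subnet and SSH user (GNU sed -i).
cp registry.example.yml registry.yml
sed -i 's/192\.168\.1\./192.168.0./; s/ssh_user: pi/ssh_user: dietpi/' registry.yml
cat registry.yml
```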
## Registry skeleton
```yaml
domain: internal

dns:
  host: pi1
  ingress_ip: 192.168.1.51
  config_path: /etc/dnsmasq.conf

hosts:
  pi1:
    ip: 192.168.1.51
    arch: arm64-linux
    ssh_user: pi
    path: /home/pi/services
    roles: [infrastructure, vault]
  pi2:
    ip: 192.168.1.52
    arch: arm64-linux
    ssh_user: pi
    path: /home/pi/services
    roles: [databases]
  pi3:
    ip: 192.168.1.53
    arch: arm64-linux
    ssh_user: pi
    path: /home/pi/services
    roles: [workflows]
  pi4:
    ip: 192.168.1.54
    arch: arm64-linux
    ssh_user: pi
    path: /home/pi/services
    roles: [app_services]

caddy:
  host: pi1
  ingress_host: pi1
  config_path: /etc/caddy/Caddyfile
  admin_api: http://127.0.0.1:2019
  use_admin_api: true
```
## Image builds — use a builder host
Building Docker images on a Pi works in theory and rarely in practice for anything heavier than `nginx:alpine`. Set up `docker buildx` on a beefier machine and let it cross-build for `linux/arm64`:
```bash
# On the builder host (laptop, x86, or Apple Silicon)
./portoser cluster setup-buildx

# Builds for all Pi-targeted services in parallel (default batch size 4)
./portoser cluster build --all

./portoser cluster deploy --all
```
The wiring lives in `lib/cluster/buildx.sh`, `build.sh`, and `deploy.sh`. `cluster setup-buildx` creates a buildx builder named after your registry; `cluster build` produces multi-arch images and pushes them to whatever registry you've configured. `cluster deploy` SSHs into each Pi and runs `docker compose up -d` with the new image tag.
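Under the hood these wrappers drive plain `docker buildx`. A hedged sketch of the equivalent manual invocations; the builder name, registry host, service, and tag below are illustrative placeholders, not Portoser's actual naming, and the commands are printed as a dry run so nothing here needs a running Docker daemon:

```shell
#!/usr/bin/env bash
set -euo pipefail

BUILDER="pi-cluster"                      # hypothetical builder name
IMAGE="registry.internal:5000/myapi:dev"  # hypothetical registry/service/tag

# Dry run: print the commands instead of executing them. On a machine
# with Docker installed, drop the echos to run for real.
{
  echo docker buildx create --name "$BUILDER" --use
  echo docker buildx build --platform linux/arm64 -t "$IMAGE" --push .
} | tee buildx-dryrun.txt
```

The `--push` matters: a cross-built `linux/arm64` image is useless in the builder host's local image store, so it goes straight to the registry the Pis pull from.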
## Watching it run
```bash
# Live cluster health, refreshing
./portoser cluster health --watch

# Per-Pi container status
./portoser cluster docker-health --verbose

# What's actually deployed where
./portoser cluster status --json | jq .
```
The web UI's cluster view shows the same data with drag-and-drop service moves between Pis. Drops stage in a deployment panel — they don't take effect until you click Deploy.
## Common gotchas
- SD card I/O kills health checks. A Pi running Postgres on a stock SD card will look fine for an hour and then start failing health under load. Move the data dir to a USB 3 SSD, or put the database Pi on an SSD-only setup.
- Bash versions on Pi OS. Modern Raspberry Pi OS ships Bash 5; the CLI works. Older images may not — `bash --version` to check.
- DNS resolution from the Pi back to itself. dnsmasq on `pi1` resolves `*.internal` for the cluster, but `pi1` itself needs `127.0.0.1` (or its own LAN IP) listed first in `/etc/resolv.conf` or systemd-resolved. The `lib/dns.sh` setup helper covers this.
- `docker compose up` hangs on first deploy. It's pulling the base image. Watch with `ssh pi-N "docker compose -f /path/to/compose logs -f"`.
- mDNS clashes. Two Pis advertising the same hostname over `.local` will both flap. Pick `.internal` (Portoser's default `domain`) and use dnsmasq.
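For the SD-card gotcha above, you can measure whether storage is the problem before moving anything. A minimal sketch using GNU `dd` with `conv=fsync` so the card can't hide behind the page cache; the thresholds in the comment are rough rules of thumb, not Portoser's health-check limits:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Sequential write of 64 MiB, fsync'd at the end; dd reports effective
# throughput on stderr. Stock SD cards often land under ~20 MB/s for
# this kind of write; a USB 3 SSD is typically well over 100 MB/s.
dd if=/dev/zero of=./iotest.bin bs=1M count=64 conv=fsync 2>&1 | tail -n 1
rm -f ./iotest.bin
echo "write test complete"
```

Run it in the directory the database actually writes to (its data dir), since throughput differs between the boot SD and an attached SSD.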
## Where this shape falls down
- Heavy build jobs and ML workloads need a bigger machine somewhere. The Pi cluster is fine for the API tier, the workflow tier, and small databases — not for training.
- A 4-Pi cluster idles around 8–12 W total, while a single GPU host draws many times that on its own. If you need GPU, see the GPU + CPU Split shape.
- One Pi acting as DNS + Caddy + control plane is a single point of failure. For higher uptime, mirror the infrastructure role to a second host or move it to a Mac mini in a Mixed Architecture layout.
## Next
- Mixed Architecture Cluster when you add a Mac mini to the mix
- Operations: Health Monitoring for what `cluster health` actually checks
- CLI Commands — full reference for `portoser cluster *`