Mac mini Lab
A 2–4 Mac mini setup. Mostly Apple Silicon, sometimes with one Intel mini still in the rack. Best for small studios that have already invested in the Apple ecosystem and want first-class launchd deployments alongside Docker.
TL;DR
| Aspect | Details |
| --- | --- |
| Hardware | 2–4 Mac minis, M-series preferred (M1/M2/M3/M4); one Intel mini works as a `buildx` host or amd64-only workload runner |
| OS | macOS 13+ (Ventura or later) on every node |
| Bash | 5.x required — Apple's Bash 3.2 will not run the CLI. `brew install bash` on every host. |
| Network | Wired Ethernet via a switch; ideally 2.5GbE if you have it |
| Storage | Internal SSDs are excellent. External Thunderbolt SSDs work too. |
| Time to first multi-host deploy | ~30 minutes with SSH keys already in place |
| Biggest gotcha | macOS firewall and "Allow incoming connections" prompts will block listeners until you handle them |
Why pick this shape
- `launchd` is a real first-class deployment target. Not every orchestrator can claim that.
- Apple Silicon is fast per watt and silent. Four M-series minis will outperform a 12-Pi cluster on most workloads.
- The macOS environment is consistent across all nodes — no `apt` vs `dnf` vs `apk` confusion.
- MLX inference workloads run natively on Apple GPUs. If you're doing local LLM serving, this is the shape.
Registry skeleton
```yaml
domain: internal
dns:
  host: mini1
  ingress_ip: 192.168.1.10
  config_path: /opt/homebrew/etc/dnsmasq.conf
  resolver_path: /etc/resolver/internal
hosts:
  mini1:
    ip: 192.168.1.10
    arch: arm64-apple
    ssh_user: mini1
    path: /Users/mini1/services
    roles: [infrastructure, vault, mlx_backends]
  mini2:
    ip: 192.168.1.11
    arch: arm64-apple
    ssh_user: mini2
    path: /Users/mini2/services
    roles: [databases, mlx_inference]
  mini3:
    ip: 192.168.1.12
    arch: arm64-apple
    ssh_user: mini3
    path: /Users/mini3/services
    roles: [workflows, app_services]
caddy:
  host: mini1
  ingress_host: mini1
  config_path: /opt/homebrew/etc/Caddyfile
  admin_api: http://127.0.0.1:2019
  use_admin_api: true
```
Three flavors of deployment, all native to macOS
docker — Docker Desktop
Docker Desktop on Apple Silicon is solid. Caveats: it eats RAM (~2 GB just sitting there), and per-mini Docker installs each have their own image cache. If you're doing repetitive builds, set up a single buildx host and push to a shared registry instead of building per-mini.
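A minimal sketch of that pattern, with everything hypothetical: the build host's address, the context and builder names, and the `registry.internal` push target are assumptions, not portoser conventions. Multi-platform builds also assume the builder has QEMU binfmt set up for the non-native architecture (Docker Desktop ships it).

```bash
# Register the designated build host as a Docker context, then as a buildx
# builder. Address, names, and registry host are illustrative.
docker context create lab-builder --docker "host=ssh://builder@192.168.1.13"
docker buildx create --name lab --driver docker-container lab-builder
docker buildx use lab

# Build once, push to a shared registry; every mini pulls instead of rebuilding
docker buildx build --platform linux/arm64,linux/amd64 \
  -t registry.internal/embedding-api:latest --push .
```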
local — Python via uv, run as a backgrounded process
```yaml
services:
  embedding-api:
    hostname: embedding.internal
    current_host: mini2
    deployment_type: local
    service_file: /embedding_api/service.yml
    port: 9200
```
`lib/local.sh` installs deps via `uv sync`, starts the entrypoint, writes a PID file under `~/.portoser/run/`, and writes logs to `~/.portoser/logs/<service>.log`. macOS doesn't need any of the systemd plumbing the Linux path uses — it just shells out and watches the PID.
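In spirit, the start path amounts to something like the sketch below. This is not the actual `lib/local.sh`; `$service` and `$entrypoint` are placeholder variables standing in for values parsed from the service's `service.yml`.

```bash
# Sketch of the "local" start path. $service and $entrypoint are
# placeholders for values read from the service's service.yml.
mkdir -p "$HOME/.portoser/run" "$HOME/.portoser/logs"
uv sync                                        # install deps from the lockfile
nohup uv run "$entrypoint" \
  >> "$HOME/.portoser/logs/$service.log" 2>&1 &
echo $! > "$HOME/.portoser/run/$service.pid"   # PID file for status/stop
```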
native — launchd plists
```yaml
services:
  postgres-prod:
    hostname: postgres.internal
    current_host: mini2
    deployment_type: native
    service_file: /postgres/service.yml
    port: 5432
```
The service's own `service.yml` declares its launchd label and plist path (or where to install one). `lib/native.sh` calls `launchctl bootstrap` / `bootout` and reads status from `launchctl list`. `lib/platform/detector.sh` figures out you're on Darwin and routes there; the same registry entry on a Linux host would dispatch through `systemctl`.
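Done by hand, the whole flow looks roughly like this. The label, plist path, user, and postgres paths are illustrative, not what portoser actually generates:

```bash
# Illustrative only: label and all paths below are made up for the example
sudo tee /Library/LaunchDaemons/com.lab.postgres-prod.plist >/dev/null <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.lab.postgres-prod</string>
  <key>UserName</key><string>mini2</string>
  <key>ProgramArguments</key>
  <array>
    <string>/opt/homebrew/bin/postgres</string>
    <string>-D</string>
    <string>/Users/mini2/services/postgres/data</string>
  </array>
  <key>RunAtLoad</key><true/>
  <key>KeepAlive</key><true/>
</dict>
</plist>
EOF
sudo launchctl bootstrap system /Library/LaunchDaemons/com.lab.postgres-prod.plist
sudo launchctl list | grep com.lab.postgres-prod   # the status lib/native.sh reads
```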
Bringing it up
```bash
# Set up SSH keys to all minis (key-only auth strongly recommended)
ssh-copy-id mini1@192.168.1.10
ssh-copy-id mini2@192.168.1.11
ssh-copy-id mini3@192.168.1.12

# Install dnsmasq + Caddy (plus Bash 5) on the infrastructure host
ssh mini1 'brew install dnsmasq caddy bash'

# Validate and deploy
./portoser registry validate
./portoser caddy sync
./portoser deploy mini1 caddy dnsmasq vault
./portoser deploy mini2 postgres-prod
./portoser health check-all
```
macOS-specific things to handle
The "Allow incoming connections" dialog
The first time anything binds a listening socket, macOS pops a dialog asking whether to allow incoming connections. Over SSH, you don't see the dialog — you see the connection failing. Two options:
- Pre-approve via `socketfilterfw` (you can verify it took from an SSH session, as shown after this list):

  ```bash
  sudo /usr/libexec/ApplicationFirewall/socketfilterfw \
    --add /Users/mini2/services/postgres/bin/postgres
  sudo /usr/libexec/ApplicationFirewall/socketfilterfw \
    --unblockapp /Users/mini2/services/postgres/bin/postgres
  ```

- Or disable the firewall entirely on lab-only minis. Not recommended for anything internet-facing.
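To confirm the rules without a GUI session; both query flags are standard `socketfilterfw` options:

```bash
# Is the firewall on at all, and is this binary still blocked?
ssh mini2 'sudo /usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate'
ssh mini2 'sudo /usr/libexec/ApplicationFirewall/socketfilterfw \
  --getappblocked /Users/mini2/services/postgres/bin/postgres'
```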
Resolver config for .internal
Portoser's DNS layer drops a file into `/etc/resolver/internal` so macOS knows to forward `*.internal` lookups to your dnsmasq host. This requires sudo access on each mini once. The `dns.host` and `dns.resolver_path` keys in the registry handle it; running `./portoser dns config` shows you what got applied.
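The file itself is tiny. Writing it by hand looks like this, assuming the dnsmasq host from the registry above:

```bash
# macOS resolver(5) syntax: one nameserver line per /etc/resolver/<domain> file
sudo tee /etc/resolver/internal >/dev/null <<'EOF'
nameserver 192.168.1.10
EOF
scutil --dns | grep -B1 -A2 internal   # confirm macOS registered the resolver
```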
Power & sleep
System Settings → Battery / Energy → "Prevent automatic sleeping when display is off." A sleeping mini is a dead cluster member. Enable "Wake for network access" in the same pane so a mini that does doze off still answers SSH.
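On a headless mini it's easier to apply the same policy over SSH with `pmset`:

```bash
# -a applies to all power sources; womp = "wake on magic packet"
sudo pmset -a sleep 0        # never sleep the system
sudo pmset -a disksleep 0    # don't spin down disks
sudo pmset -a womp 1         # wake for network access
pmset -g custom              # verify what's set
```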
Bash 3.2 keeps coming back
macOS updates can put Apple's `/bin/bash` back in front even after you `brew install bash`. Always run the CLI as `./portoser` from the repo (where the shebang resolves through `/usr/bin/env bash`) and make sure Homebrew's Bash is first on `PATH` for your shell.
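A quick audit across the rack (hostnames as in the registry above):

```bash
# Print the Bash version each mini's `env bash` resolves to; anything
# starting with 3.2 means Apple's Bash is winning on that host
for h in mini1 mini2 mini3; do
  printf '%s: ' "$h"
  ssh "$h" '/usr/bin/env bash -c "echo \$BASH_VERSION"'
done
```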
Watching it run
```bash
./portoser health check-all
./portoser metrics             # CPU/RAM/disk via lib/metrics/collector.sh
./portoser uptime              # uptime windows from lib/sustain/
./portoser dependencies graph  # service dependency graph as JSON
```
The web UI gives you the same data with drag-and-drop: move a service from mini2 to mini3, the deploy panel collects the change, and you click Deploy when you're done.
Where this shape falls down
- macOS containers run inside a Linux VM (Docker Desktop or OrbStack). For most workloads that's fine — for high-throughput networking or GPU passthrough, less so.
- Apple Silicon images are arm64-only. If a vendor only ships amd64 containers, you'll need either Rosetta 2 emulation (slow — see the one-liner after this list) or one Intel mini in the cluster (see Mixed Architecture).
- Replacing macOS with Linux on a mini is possible, but then you're trading away the Apple Silicon story this shape is built on. If you want Linux, get Pis or a NUC instead.
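Forcing emulation is a one-flag affair; `vendor/amd64-only-image` is a placeholder:

```bash
# Runs the amd64 image under emulation on Apple Silicon — works, but slow
docker run --platform linux/amd64 vendor/amd64-only-image:latest
```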
Next
- Mixed Architecture Cluster when you add Pis or x86 machines
- GPU + CPU Split when one mini becomes the MLX inference host
- Vault Integration — Vault running on `mini1` is the recommended setup