Home + VPS Hybrid
A local cluster for development and private services, plus a remote VPS for the public-facing surface. Portoser drives both from the same `registry.yml`. The two networks join over WireGuard or Tailscale, and Portoser's built-in CA distributes mTLS across the link.
TL;DR
| | |
| --- | --- |
| Hardware | A home cluster (1+ machines) + at least one VPS (any provider with a public IP and SSH access) |
| Network bridge | WireGuard or Tailscale mesh; Cloudflare Tunnel as an alternative for the inbound side |
| TLS | mTLS between local and VPS via Portoser's built-in CA; public-facing TLS via Caddy on the VPS or Cloudflare |
| DNS | Internal `*.internal` resolved by your home dnsmasq; public `*.example.com` served by your public DNS provider |
| Time to first hybrid deploy | ~2–3 hours including WireGuard config and CA distribution |
| Biggest gotcha | The VPS must reach your home network for SSH-based deploys; that's what the mesh VPN is for. Don't skip it and try to expose SSH to the internet. |
Why pick this shape
- Keep secrets, databases, and dev services at home. Expose only what needs to be public.
- One registry, one CLI, one web UI for both halves. Deployments to the VPS look identical to deployments at home.
- The VPS is a small, single-purpose machine; the home cluster carries the weight.
Network shape
```
[Internet]
    │
    ├── Cloudflare Tunnel ──────────────┐
    │                                   ▼
    └── DNS: api.example.com → VPS public IP
                      │
                      ▼
                ┌──────────┐
                │   VPS    │  Caddy on :80/:443, public-facing
                │ (Linux)  │  Connected to home over WireGuard
                └─────┬────┘
                      │  WireGuard mesh / Tailscale
                      │
        ┌─────────────┴─────────────┐
        ▼                           ▼
  ┌────────────┐              ┌────────────┐
  │   mini1    │              │    pi1     │
  │ (control)  │              │ (services) │
  └────────────┘              └────────────┘
               Home network
```
Registry shape
The VPS is just another host. The only thing different about it is the IP — and the routing rules in Caddy that mark it as the public ingress.
```yaml
domain: internal
hosts:
  mini1:
    ip: 192.168.1.10            # LAN IP
    arch: arm64-apple
    ssh_user: mini1
    roles: [infrastructure, vault, databases]
  pi1:
    ip: 192.168.1.51
    arch: arm64-linux
    ssh_user: pi
    roles: [internal_services]
  vps:
    ip: 10.10.0.2               # WireGuard mesh IP, NOT the public IP
    arch: amd64-linux
    ssh_user: admin
    roles: [public_ingress]
services:
  api-public:
    hostname: api.example.com   # public hostname, NOT *.internal
    current_host: vps
    deployment_type: docker
    docker_compose: /api/docker-compose.yml
    port: 8443
    healthcheck_url: https://api.example.com/health
  api-backend:
    hostname: api-backend.internal
    current_host: mini1
    deployment_type: docker
    docker_compose: /api_backend/docker-compose.yml
    port: 9000
    # The public api-public service forwards to this internal one
    # over WireGuard. mTLS protects the link.
caddy:
  host: mini1                   # internal Caddy on the LAN
  ingress_host: mini1
  config_path: /opt/homebrew/etc/Caddyfile
  use_admin_api: true
```
A second Caddy instance on the VPS handles the public side. You can run it as a Portoser-managed service too:
```yaml
services:
  caddy-public:
    hostname: ingress.example.com
    current_host: vps
    deployment_type: native
    service_file: /caddy/service.yml
    port: 443
```
Setting up the WireGuard mesh
Tailscale is the lower-effort choice; WireGuard with manually managed keys is the lower-dependency one. Either way, the home cluster and the VPS need to be able to reach each other on a private subnet (e.g. 10.10.0.0/24).
```sh
# Tailscale (recommended for getting going)
ssh vps 'curl -fsSL https://tailscale.com/install.sh | sh && sudo tailscale up'

# Repeat on every home host you want to expose to the VPS:
sudo tailscale up

# Verify:
ssh vps 'tailscale status'
```
After this, your VPS has a 100.x.x.x Tailscale IP for every home host. Use those IPs in the registry, not the LAN IPs — the VPS can't reach 192.168.1.10 directly.
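If you go the manual-WireGuard route instead, the mesh is two small config files. A minimal sketch, assuming the 10.10.0.0/24 addressing used in this guide and that mini1 takes 10.10.0.1; the keys and endpoint are placeholders you generate yourself (`wg genkey | tee privatekey | wg pubkey > publickey`):

```ini
# --- /etc/wireguard/wg0.conf on the VPS (10.10.0.2) ---
[Interface]
Address = 10.10.0.2/24
PrivateKey = <vps-private-key>
ListenPort = 51820

[Peer]                         # mini1, dialing in from behind home NAT
PublicKey = <mini1-public-key>
AllowedIPs = 10.10.0.1/32

# --- /etc/wireguard/wg0.conf on mini1 (10.10.0.1) ---
[Interface]
Address = 10.10.0.1/24
PrivateKey = <mini1-private-key>

[Peer]                         # the VPS
PublicKey = <vps-public-key>
Endpoint = <vps-public-ip>:51820
AllowedIPs = 10.10.0.0/24
PersistentKeepalive = 25       # home side is NAT'd; keeps the mapping alive
```

Bring each side up with `sudo wg-quick up wg0`, then verify with `ping 10.10.0.2` from mini1.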
Distributing mTLS to the VPS
Portoser ships an `install_ca_on_hosts.sh` helper at the repo root. Run it once with the VPS in your registry, and the CA cert lands in the right place on the VPS so subsequent service-to-service mTLS works:
```sh
./portoser certs init-ca              # idempotent; creates ~/.portoser/ca/ if missing
./install_ca_on_hosts.sh              # walks registry, scp's CA cert, updates trust store
./portoser certs generate-all-servers
./portoser certs deploy-servers       # places per-service certs on the right hosts
```
For services that talk across the mesh (api-public → api-backend), generate per-service mTLS certs with `./portoser certs generate <service>` and reference them in the service's `tls_cert` / `tls_key` / `ca_cert` fields.
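In the registry that might look like the following; the field names come from above, but the paths are assumptions to adapt to wherever `certs deploy-servers` places files on your hosts:

```yaml
services:
  api-backend:
    hostname: api-backend.internal
    current_host: mini1
    deployment_type: docker
    docker_compose: /api_backend/docker-compose.yml
    port: 9000
    tls_cert: /etc/portoser/certs/api-backend.crt   # assumed path
    tls_key: /etc/portoser/certs/api-backend.key    # assumed path
    ca_cert: /etc/portoser/ca/ca.crt                # assumed path
```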
Public TLS (the part Cloudflare or Caddy handles)
You have two reasonable options on the VPS:
Option A: Caddy auto-TLS via Let's Encrypt
Run Caddy on `vps`, point your DNS at the VPS's public IP, and let Caddy's automatic HTTPS handle certificate issuance and renewal. Portoser manages the Caddyfile via `./portoser caddy sync`.
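A minimal public-side Caddyfile for this shape could look like the sketch below. The CA path, the `localhost:8443` upstream (the api-public container from the registry), and the decision to verify its internal certificate are all assumptions to adapt:

```caddyfile
# Caddy obtains and renews the api.example.com certificate automatically,
# then proxies to the api-public container listening on :8443
api.example.com {
    reverse_proxy https://localhost:8443 {
        transport http {
            tls_trust_pool file /etc/portoser/ca/ca.crt   # trust the internal CA
            tls_server_name api.example.com               # name on api-public's cert
        }
    }
}
```

Note that `tls_trust_pool file` needs Caddy 2.8+; on older releases the equivalent directive is `tls_trusted_ca_certs`.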
Option B: Cloudflare Tunnel
Don't expose ports on the VPS at all. Cloudflare Tunnel runs as a daemon on the VPS, terminates TLS at Cloudflare's edge, and proxies plaintext into your services. The VPS stops being a public host — it's a Cloudflare Tunnel client.
```yaml
# /etc/cloudflared/config.yml on the VPS
ingress:
  - hostname: api.example.com
    service: http://localhost:8080   # your local Caddy or service
  - service: http_status:404
```
Portoser can manage this cloudflared daemon as a native service if you want the VPS's full setup tracked.
Bringing it up
```sh
# 1. Mesh VPN is up and you can reach the VPS from a home host
ssh vps 'echo connected'

# 2. Validate registry
./portoser registry validate

# 3. Distribute CA + certs
./portoser certs init-ca
./install_ca_on_hosts.sh
./portoser certs generate-all-servers
./portoser certs deploy-servers

# 4. Deploy the home side
./portoser deploy mini1 vault api-backend
./portoser deploy pi1 internal-worker

# 5. Deploy the VPS side
./portoser deploy vps caddy-public api-public

# 6. Verify the public hostname resolves and the round-trip works
curl -v https://api.example.com/health
```
What the web UI shows you
The dependency graph (ReactFlow) renders cross-host edges. `api-public` on `vps` → `api-backend` on `mini1` shows up as a single edge spanning two host columns. The deployment panel works for VPS services exactly the same as for home services — drag-and-drop moves a service from `mini1` to `vps`, the panel stages it, you click Deploy, and the orchestrator does the right thing on both ends.
Common gotchas
- VPS IP confusion. The registry's `ip:` for the VPS must be the mesh IP (Tailscale `100.x.x.x` or your WireGuard `10.10.0.x`), not the public IP. Public DNS still points at the public IP — that's a different concern.
- Outbound firewall on the VPS. Some hosting providers default-deny outbound traffic to certain ports. SSH from the VPS to home hosts is fine over the mesh; SSH from home to the VPS uses the VPS's public IP. Test both directions.
- Cert renewal. mTLS certs expire. `./portoser certs list` shows expiry; `./portoser certs generate <service>` regenerates. If you forget, `https://api.example.com` will start failing health checks when the cert expires.
- DNS split-horizon. A service named `api-backend.internal` resolves on the home network. The VPS only sees it because it's on the mesh and dnsmasq is reachable over the mesh. Confirm with `ssh vps 'getent hosts api-backend.internal'`.
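Making `*.internal` resolvable from the VPS usually comes down to two small pieces of configuration. A sketch, assuming dnsmasq runs on mini1 and mini1's mesh IP is 10.10.0.1 (both assumptions):

```
# dnsmasq on mini1: also answer queries arriving over the mesh interface
listen-address=127.0.0.1,192.168.1.10,10.10.0.1
local=/internal/
# Answer with a mesh-reachable address; note this answer is served to every
# client, so home hosts will see the mesh IP for this name too
address=/api-backend.internal/10.10.0.1
```

On the VPS, send `.internal` lookups to mini1 over the mesh — with systemd-resolved, for example, `resolvectl dns wg0 10.10.0.1` followed by `resolvectl domain wg0 '~internal'` — then re-run the `getent` check.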
Where this shape falls down
- The mesh VPN becomes load-bearing. If Tailscale or WireGuard is down, deployments to the VPS stop working. (Public traffic keeps working as long as the public Caddy or Cloudflare Tunnel doesn't depend on the home services for that request.)
- Latency from the VPS to home services is whatever your home upload speed is. For low-latency public APIs that depend on home-only services, you'll want to move the dependent service to the VPS too.
- This is not a high-availability setup. It's a "private dev cluster + public face" setup. If your home internet drops, anything the public side proxies to home stops working.
Next
- Mixed Architecture Cluster — the cross-arch wiring you'll likely have on the home side
- Vault Integration — Vault stays on the home side; secrets get injected into VPS services over the mesh
- Operations: Health Monitoring — what flaps when the mesh VPN dies