Caddy Integration

Caddy is Portoser's reverse proxy. The registry is the source of truth for what's routed where. ./portoser caddy sync translates the registry into a Caddyfile and reloads Caddy without dropping connections.

The integration lives in lib/caddy.sh, lib/caddy_integration.sh, and lib/caddyfile_generator.sh.

Why Caddy

  • Automatic HTTPS for any hostname Caddy can complete an ACME challenge for, which makes VPS setups painless.
  • Live config reload via Caddy's admin API. No restart-and-pray.
  • A Caddyfile is small and human-readable. You can debug it with cat.

Registry → Caddyfile

Each service in the registry that has a hostname: becomes a route in the Caddyfile. The fields that matter:

services:
  api:
    hostname: api.internal              # the route Caddy will match
    current_host: mini1                  # which host Caddy proxies to
    port: 8400                           # which port on that host
    healthcheck_url: http://api.internal/health   # optional — used for upstream health
    tls_cert: /api/certs/api-cert.pem    # optional — for mTLS upstream
    tls_key: /api/certs/api-key.pem
    ca_cert: /api/certs/ca-cert.pem
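For a service without the tls_* keys, the translation is just a hostname matcher and a reverse proxy. The exact directives are up to the generator, but a plausible generated block for the entry above looks like:

api.internal {
	reverse_proxy mini1:8400
}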

./portoser caddy regenerate writes the Caddyfile to caddy.config_path (declared in the registry's top-level caddy: block). ./portoser caddy reload tells Caddy to pick it up. ./portoser caddy sync does both.

Run ./portoser caddy validate before reloading to catch syntax errors. The validator is caddy validate --config <path> — same one Caddy itself uses.

Where Caddy runs

The registry's caddy.host field names the host Caddy lives on. In a small cluster, that's usually the same machine as DNS (dnsmasq): both are infrastructure, and both belong on the highest-uptime host you have.

caddy:
  host: mini1
  ingress_host: mini1
  config_path: /opt/homebrew/etc/Caddyfile      # macOS via Homebrew
  # config_path: /etc/caddy/Caddyfile            # Linux distros
  admin_api: http://127.0.0.1:2019
  use_admin_api: true

use_admin_api: true is the recommended path. Reloads happen via Caddy's API on :2019 and don't drop connections. Set it to false only if you have a reason — typically running Caddy in an environment where the admin API is firewalled off.
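Under the hood, a hot reload is one HTTP request: Caddy's admin API exposes a /load endpoint that accepts a Caddyfile body directly when the Content-Type is text/caddyfile. A minimal sketch, assuming the registry defaults above (the caddy_reload name here is illustrative, not lib/caddy.sh's actual function):

```shell
#!/bin/sh
# Hot-reload sketch: POST the Caddyfile to the admin API's /load endpoint.
# ADMIN and CADDYFILE mirror the registry's admin_api and config_path values.
ADMIN="${ADMIN:-http://127.0.0.1:2019}"
CADDYFILE="${CADDYFILE:-/opt/homebrew/etc/Caddyfile}"

caddy_reload() {
  # --fail makes curl exit non-zero on a 4xx/5xx response, so a rejected
  # config surfaces as a failed reload instead of passing silently.
  curl --fail -sS -X POST "$ADMIN/load" \
    -H 'Content-Type: text/caddyfile' \
    --data-binary "@$CADDYFILE"
}
```

Because /load swaps the config atomically, a rejected Caddyfile leaves the previous routes serving untouched.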

What caddy sync actually does

  1. Read the registry, walk every service with a hostname:.
  2. Generate a Caddyfile block per service with reverse proxy to current_host:port.
  3. If the service declares tls_cert/tls_key/ca_cert, generate a tls directive.
  4. Write the new Caddyfile to a temp path, validate it.
  5. POST to Caddy's admin API to load the new config atomically.
  6. If the load fails, leave the previous config running and surface the validation error.

Implementation: lib/caddyfile_generator.sh:generate_caddyfile does steps 1–4; lib/caddy.sh:caddy_reload handles 5–6.
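The generate-to-temp-then-validate dance (steps 2–4) can be sketched like this; emit_service_block is an illustrative stand-in, not the real function in lib/caddyfile_generator.sh:

```shell
#!/bin/sh
# Sketch of steps 2-4: render one route block per service, write the result
# to a temp file, and validate it before it replaces the live Caddyfile.
emit_service_block() {  # args: hostname current_host port
  printf '%s {\n\treverse_proxy %s:%s\n}\n\n' "$1" "$2" "$3"
}

tmp=$(mktemp)
emit_service_block api.internal mini1 8400 > "$tmp"

# Validate before anything touches config_path (skip gracefully if caddy
# isn't on PATH in this environment).
if command -v caddy >/dev/null 2>&1; then
  caddy validate --adapter caddyfile --config "$tmp"
fi
```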

Per-service updates

You don't have to regenerate the whole Caddyfile to update one service:

./portoser caddy update api          # regenerate just api's block, hot-reload
./portoser caddy proxy api           # test that Caddy can reach api's upstream

Useful when you're iterating on a single service and don't want to re-validate every other route on every reload.

TLS

Caddy handles TLS in two distinct directions:

Public TLS (Let's Encrypt / ZeroSSL)

If hostname: is a real domain pointed at the Caddy host's public IP, Caddy will auto-provision a Let's Encrypt cert. No additional registry config needed. ACME challenge happens on :80 / :443; if those ports aren't reachable from the internet, fall back to DNS-01 (configure via Caddyfile snippets in your config_path template).
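A DNS-01 snippet in your Caddyfile template looks roughly like the following. It assumes a Caddy build that includes a DNS provider plugin (Cloudflare here, purely as an example) and an API token in the environment; api.example.com is a placeholder:

api.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy mini1:8400
}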

Upstream mTLS (Caddy → service)

When a service inside your cluster requires mTLS, Caddy presents a client certificate to it. The tls_cert/tls_key/ca_cert keys on the service entry tell the generator to emit the right Caddy directives. The certs themselves come from ./portoser certs generate — see Certificates & mTLS.
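For the api entry from the registry example, the generated block gains a transport stanza. Exact directive names vary by Caddy version (newer releases use tls_trust_pool file, older ones tls_trusted_ca_certs), so treat this as one plausible shape rather than the generator's literal output:

api.internal {
	reverse_proxy https://mini1:8400 {
		transport http {
			tls_client_auth /api/certs/api-cert.pem /api/certs/api-key.pem
			tls_trust_pool file /api/certs/ca-cert.pem
		}
	}
}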

Common operations

./portoser caddy sync          # full regenerate + reload
./portoser caddy regenerate    # write Caddyfile, do not reload
./portoser caddy reload        # reload Caddy with current Caddyfile
./portoser caddy validate      # syntax-check Caddyfile
./portoser caddy update <svc>  # update one service's block
./portoser caddy proxy <svc>   # test upstream connectivity

Common gotchas

  • config_path mismatch. macOS Homebrew Caddy uses /opt/homebrew/etc/Caddyfile; Linux distros vary. If caddy validate succeeds but reload does nothing, the Caddyfile got written somewhere Caddy isn't reading.
  • Admin API not on 127.0.0.1:2019. Caddy can be configured to listen on a different admin address (the admin global option in the Caddyfile, or the CADDY_ADMIN environment variable). Check the running instance's Caddyfile global options, or query the address you think is right with curl <addr>/config/admin, and update the registry's admin_api: to match.
  • Stale routes. If you remove a service from the registry, run ./portoser caddy sync — the regenerator walks the registry and emits routes only for services that still exist. Manually edited Caddyfiles will get overwritten.
  • TLS termination loop. Don't set a service's port: to 443 when Caddy is also terminating TLS for it. Caddy listens on :443, terminates, and proxies plaintext (or upstream-mTLS) to the service's actual port.

Where Caddy fits and where it doesn't

Caddy is the cluster's edge router. It's good at HTTP routing, automatic TLS, and graceful reloads. It's not a service mesh — service-to-service traffic that doesn't go through Caddy doesn't get its TLS or routing rules. For internal-only mTLS without Caddy in the path, services authenticate to each other directly using certs from ./portoser certs generate.

Next

  • Certificates & mTLS — how the certs Caddy presents and verifies are issued and rotated
  • Service Registry — full schema for hostname, port, tls_* fields
  • VPS + Home Hybrid — where Caddy on the VPS handles public TLS and Caddy at home handles internal routing