# Quick Start
This walkthrough takes you from a fresh checkout to a deploy with the self-healing loop running. The orchestrator is bash — there is no daemon to install, no agent on workers, and no signup. Adding a machine or a service means editing one YAML file (`registry.yml`).
Time required:
- Single-machine demo: ~5 minutes (only Docker required).
- Real multi-host setup: ~30 minutes once SSH keys to your hosts are in place.
## Prerequisites
- Bash 5+ on the control host (the box running the `portoser` script). macOS ships 3.2 — install via `brew install bash`.
- `yq` (Mike Farah's Go-based v4), `jq`, `ssh`, `scp`, `curl`. On macOS: `brew install yq jq`.
- Docker + Docker Compose for the local demo or for any host that runs `docker` services.
- Passwordless SSH (key auth) to every host you list in `registry.yml`. The CLI never prompts for a password mid-deploy.
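Before editing anything, it can save time to confirm those tools are actually on `PATH`. A throwaway helper like the one below (the `need` function is illustrative, not a portoser command) flags anything missing:

```shell
# Illustrative pre-flight helper, not part of portoser: report any of the
# required CLI tools that are missing from PATH.
need() {
  local missing=0 tool
  for tool in "$@"; do
    if ! command -v "$tool" >/dev/null 2>&1; then
      echo "missing: $tool" >&2
      missing=1
    fi
  done
  return "$missing"
}

need yq jq ssh scp curl || echo "install the tools listed above first"
```

The Bash version itself is worth checking too (`bash --version`), since the stock macOS shell is too old.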
## Path A — single-machine demo (5 min)
The fastest way to see the orchestrator end-to-end: it brings up Caddy, a small FastAPI dashboard, and one nginx-backed dummy service via the `docker-compose.yml` at the repo root.
```shell
git clone https://github.com/nonagenticai/portoser.git
cd portoser
cp .env.example .env
docker compose up
# Open http://localhost:8080
```
There is no Postgres, Vault, or Keycloak in this path — it's a self-contained loop validation. The full web UI lives under Path B.
## Path B — first real deploy (~30 min)
This walks through what really happens when you point Portoser at one or more hosts: edit the registry, validate, deploy, watch.
### 1. Edit the registry
`registry.yml` is the single source of truth. Hosts and services live in two top-level maps. Copy the example and start small:
```shell
cp registry.example.yml registry.yml
$EDITOR registry.yml
```
A minimal registry with one host and one service:
```yaml
domain: home.local

hosts:
  m1:
    ip: 192.168.0.10
    ssh_user: admin
    ssh_port: 22
    arch: arm64-apple     # or amd64-linux, arm64-linux, etc.
    path: /srv/services   # where service trees live on this host
    roles: [dev]

services:
  my-api:
    hostname: m1
    current_host: m1
    deployment_type: docker   # or "local" (uv-managed Python) or "native"
    docker_compose: /srv/services/my-api/docker-compose.yml
    port: 8000
    healthcheck_url: http://m1:8000/health
    description: "Toy FastAPI service"
    dependencies: []
```
There is no `portoser machine add` or `portoser service add` subcommand — the registry is the input; you edit it directly.
### 2. Validate before deploying
```shell
./portoser registry validate         # YAML + required fields
./portoser dependencies check        # circular deps, missing services
./portoser remote test-connections   # SSH reachability for every host
```
If any of these fail, fix the registry or your SSH config before moving on.
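If `test-connections` fails, key auth usually isn't set up yet. A one-time setup per host looks like this, using standard OpenSSH tooling (nothing portoser-specific); the host values match the `m1` example above:

```shell
# One-time key setup per host, with plain OpenSSH commands.
mkdir -p ~/.ssh
key=~/.ssh/id_ed25519
[ -f "$key" ] || ssh-keygen -t ed25519 -N '' -f "$key" -q

# Then push the public key and confirm key auth works without a prompt
# (BatchMode makes ssh fail fast instead of asking for a password):
#   ssh-copy-id -p 22 admin@192.168.0.10
#   ssh -o BatchMode=yes -p 22 admin@192.168.0.10 true && echo "key auth OK"
```

The `BatchMode=yes` check matters because the CLI never prompts mid-deploy: if that `ssh` fails, a deploy to that host will too.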
### 3. Deploy
```shell
./portoser deploy m1 my-api
```
What happens:
- Observe — pre-flight checks on the target (port free, disk space, dependencies up, Docker running).
- Diagnose — if any check fails, the analyzer fingerprints the failure (e.g. `PROBLEM_PORT_CONFLICT`).
- Solve — if a known playbook matches the fingerprint, the matching action runs automatically (auto-heal is the default).
- Learn — the outcome is appended to `~/.portoser/knowledge/` with a frequency count and a generated playbook if this is a new fingerprint.
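The loop above can be sketched in a few lines of bash. Every function body here is a toy stand-in (the real checks live inside the portoser script); only the control flow is the point:

```shell
# Toy sketch of the observe / diagnose / solve / learn loop. All function
# bodies are stand-ins; the real checks live inside the portoser script.
observe()  { [ -z "${FAKE_FAILURE:-}" ]; }           # pre-flight checks pass?
diagnose() { echo "${FAKE_FAILURE:-}"; }             # fingerprint the failure
solve()    { [ "$1" = "PROBLEM_PORT_CONFLICT" ]; }   # known playbook for it?
learn()    { echo "recorded: $1"; }                  # append to knowledge base

heal_loop() {
  if observe; then
    echo "pre-flight clean"
    return 0
  fi
  local fp
  fp=$(diagnose)
  if solve "$fp"; then
    echo "auto-healed: $fp"
  else
    echo "needs manual attention: $fp"
  fi
  learn "$fp"
}
```

With `FAKE_FAILURE=PROBLEM_PORT_CONFLICT` set, `heal_loop` reports an auto-heal; an unrecognized fingerprint falls through to manual attention, and either way the outcome is recorded.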
Useful flags:
- `--dry-run` — print the plan, don't execute.
- `--no-auto-heal` — observe and diagnose, but don't apply playbooks. Use this when you want to read what would have happened.
- `--json-output` — machine-readable output (used by the web UI).
### 4. Verify
```shell
./portoser status                       # everything in the registry
./portoser health check my-api          # specific service
./portoser health check-all             # all services with scores
./portoser dependencies info my-api     # deps + dependents
./portoser dependencies impact my-api   # blast radius if my-api goes down
```
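Right after a deploy, a health check can lag behind the container actually coming up. A small polling wrapper covers that gap; the `retry` helper below is illustrative, not a portoser feature:

```shell
# Illustrative polling wrapper, not a portoser feature: re-run a command
# until it succeeds or attempts run out.
retry() {
  # usage: retry <attempts> <delay-seconds> <command...>
  local attempts=$1 delay=$2 i
  shift 2
  for ((i = 1; i <= attempts; i++)); do
    "$@" && return 0
    echo "attempt $i/$attempts failed; retrying in ${delay}s" >&2
    sleep "$delay"
  done
  return 1
}

# Example: wait up to a minute for a fresh deploy to report healthy.
# retry 10 6 ./portoser health check my-api
```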
## Day-2 commands
| Goal | Command |
|---|---|
| Stop a service | `./portoser stop my-api` |
| Stop everything on a host | `./portoser stop m1` |
| Tail local Python service logs | `./portoser local logs my-api 100` |
| Move a service between hosts | `./portoser move my-api m1 m2` |
| Diagnose a stuck service | `./portoser diagnose my-api m1` |
| Show recent deploys | `./portoser history list` |
| Roll back to a prior deploy | `./portoser history rollback <DEPLOYMENT_ID>` |
| Watch cluster health | `./portoser cluster health --watch` |
## When something fails
Run the diagnose command on the offending service and machine. It walks the same observation pipeline a deploy uses, but in read-only mode:
```shell
./portoser diagnose my-api m1
./portoser diagnose my-api m1 --json-output   # for the web UI / scripting
```
If you see `PROBLEM_UNKNOWN`, the analyzer didn't recognize the failure pattern. The diagnostic report (saved under `~/.portoser/diagnostics/`) is what you'd attach to a bug report or paste into a follow-up learn playbook entry.
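To grab the newest report for that bug report, a tiny helper like this works; the `latest_file` function is ours, and only the `~/.portoser/diagnostics/` path comes from portoser:

```shell
# Small helper (not a portoser command): print the most recently modified
# entry in a directory, e.g. the newest diagnostic report.
latest_file() {
  ls -t "$1" 2>/dev/null | head -n 1
}

# latest_file ~/.portoser/diagnostics/
```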
## What's next
- First Deployment Tutorial — full walkthrough with screenshots.
- Intelligent Deployment — how the analyzer / solver / learning loop actually works.
- Service Registry — every field in `registry.yml`, with examples.
- Web UI — drag-and-drop deploys, live metrics, knowledge base browsing.
- CLI Reference — every subcommand and flag.