# Why Portoser
Portoser is a declarative service orchestrator for clusters of 2 to 20 machines — often mixed-architecture (Apple Silicon, Intel, Raspberry Pi) and mixed-OS (macOS and Linux). One registry, SSH-based deploys, no agents on workers, and no control plane to maintain.
## What Portoser is

A single declarative registry (`registry.yml`) that defines:
- The machines in your cluster
- The services running on them
- Health checks, dependencies, environment, and secrets
…driven by a Bash + Python toolchain that:
- Deploys via SSH (no worker agents)
- Supports three deployment types from one registry — Docker Compose, local Python (uv-managed), and native systemd / launchd
- Runs a self-healing loop (Observe → Diagnose → Solve → Standardize) on every deploy
- Exposes a web UI for drag-and-drop service moves, health, metrics, history, Vault, and certificates
- Includes an MCP server so AI assistants can register tools that operate on the cluster
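To make the registry's shape concrete, here is a hypothetical `registry.yml` sketch. The key names (`machines`, `services`, `health`, `depends_on`, and so on) are illustrative assumptions, not Portoser's documented schema:

```yaml
# Hypothetical registry.yml sketch. Key names are illustrative
# assumptions, not Portoser's documented schema.
machines:
  macmini:
    host: 192.168.1.10
    arch: arm64
    os: macos
  pi-1:
    host: 192.168.1.20
    arch: arm64
    os: linux

services:
  postgres:
    machine: pi-1
    type: docker            # docker | local | native
  api:
    machine: macmini
    type: docker
    depends_on: [postgres]
    health:
      http: http://localhost:8080/healthz
    env:
      LOG_LEVEL: info
```

Because the registry lives in your repo, moving a service to another machine is an edit plus a deploy, not a sequence of imperative commands.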
## What Portoser is not
- Not Kubernetes. No CRDs, no operators, no etcd, no overlay network.
- Not a PaaS. It does not build your code or run autoscalers.
- Not for cloud-scale. It is built and tested for clusters of roughly 2–20 hosts.
- Not a hosted product. You run it on your own machines.
## Compared to other tools
| Tool | Where Portoser differs |
|---|---|
| Kubernetes (K3s / k0s / microk8s) | K3s installs in minutes. The cost is everything after — control plane, ingress controllers, RBAC, manifest drift, helm charts. Portoser stays in YAML and Bash you can read. |
| Nomad | Lighter than Kubernetes but still a scheduler with agents on every worker. Portoser is agentless and declarative-first. |
| Coolify / CapRover / Dokku | Single-host or git-push focused. Portoser is multi-host and registry-first — the source of truth is checked into your repo. |
| Portainer | A web UI on top of Docker. Portoser includes a UI too, but the source of truth is `registry.yml` in version control, not the UI's database. |
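A note on what agentless means in practice: the control machine only needs SSH access to each worker. The following dry-run sketch illustrates the idea; the `run` helper, hostnames, and paths are illustrative assumptions, not Portoser's actual commands or layout.

```shell
#!/bin/sh
# Dry-run sketch of an agentless deploy over SSH. The run() helper
# echoes each command instead of executing it, so this is safe to run.
# Hostnames and paths are illustrative, not Portoser's real layout.
run() { echo "+ $*"; }

HOST="pi-worker-1"
SERVICE="api"

# Ship the service definition to the worker and (re)start it.
# The remote side needs only sshd and Docker, no resident agent.
run scp "services/${SERVICE}/docker-compose.yml" "${HOST}:/opt/${SERVICE}/"
run ssh "${HOST}" "docker compose -f /opt/${SERVICE}/docker-compose.yml up -d"
```

Replacing `run` with direct execution turns the sketch into a real push-style deploy; the point is that nothing needs to be installed on the worker beforehand.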
## What actually ships
Verified against the codebase as of v1.0.0-alpha:
- Agentless multi-host orchestration over SSH — `lib/cluster/`
- Three deployment types — `lib/docker.sh`, `lib/local.sh`, `lib/native.sh`
- Self-healing loop — `lib/observe/`, `lib/diagnose/`, `lib/solve/`, `lib/standardize/`
- Knowledge base of resolved problems — `~/.portoser/knowledge/playbooks/`
- Caddy auto-config with live admin-API reload — `lib/caddy_integration.sh`
- mTLS with built-in CA distribution — `install_ca_on_hosts.sh`
- HashiCorp Vault integration with a Portoser-built management UI
- Keycloak OIDC on the backend
- Web UI: cluster view with drag-and-drop, dependency graph (ReactFlow), monitoring dashboard with custom SVG charts, deployment history with rollback, certificates, Vault, MCP tool registry
- WebSocket-streamed deployment logs and metrics
- 500+ tests across the CLI and web backend, including security and race-condition coverage
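The Observe → Diagnose → Solve → Standardize loop can be pictured as a small state machine. This is a conceptual sketch only: the function names and the stubbed health data are illustrative, not Portoser's real API.

```python
# Conceptual sketch of a self-healing deploy loop. All names and the
# stubbed health data are illustrative, not Portoser's actual API.

def observe(service):
    # In Portoser this would run the service's health check over SSH;
    # here we just read a stubbed status field.
    return service["status"]

def diagnose(status):
    # Map a symptom to a known problem class.
    return {"unhealthy": "port_conflict"}.get(status)

def solve(service, problem):
    # Apply a remediation for the diagnosed problem.
    if problem == "port_conflict":
        service["status"] = "healthy"

def standardize(problem, playbook):
    # Record the resolved problem so future deploys reuse the fix,
    # mirroring the knowledge base of playbooks.
    playbook.append(problem)

def self_heal(service, max_rounds=3):
    playbook = []
    for _ in range(max_rounds):
        if observe(service) == "healthy":
            break
        problem = diagnose(service["status"])
        solve(service, problem)
        standardize(problem, playbook)
    return playbook

svc = {"name": "api", "status": "unhealthy"}
resolved = self_heal(svc)
```

The loop bounds its retries, fixes what it can, and records what it fixed, which is the essence of the Observe → Diagnose → Solve → Standardize cycle.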
## What's still cooking
- First-party MCP tools — the FastMCP server and tool-registry API are wired; first-party tools that act on the cluster land soon. You can register your own today.
- Frontend login flow for Keycloak — backend middleware is ready; the UI's login page is in progress.
- CI matrix for multi-host scenarios — multi-host paths are tested locally; CI matrix coming.
## Who Portoser is for
- Solo developers running side projects across a Mac mini and a couple of Pis.
- Small studios with a handful of machines they want to treat as one cluster.
- Home-lab operators who want declarative orchestration without a control plane to maintain.
- ML engineers splitting CPU services and a single GPU box.
- AI developers experimenting with letting an MCP-equipped assistant run their cluster.
If that sounds like you, the Quickstart gets you to a working deploy in about five minutes.