Frequently Asked Questions

Positioning

What is Portoser, in one sentence?

A declarative, agentless multi-host service orchestrator with a self-healing loop, built for clusters of 2–20 mixed-architecture machines.

How is this different from Kubernetes?

No control plane to maintain, no etcd, no CRDs, no operators, no overlay network. The trade-off is that Portoser is built for clusters in the dozens of hosts, not thousands.

How is this different from Nomad?

Nomad has agents. Portoser does not — it operates over plain SSH. Nomad is a scheduler; Portoser is a declarative deployer with a self-healing loop.

Is Portoser production-ready?

It's at v1.0.0-alpha. The core orchestration, web UI, and 500+ tests across the CLI and web backend are solid. Things still maturing: first-party MCP tools, the frontend Keycloak login flow, and the multi-host CI matrix. See Why Portoser for the full breakdown of what ships and what's still cooking.

Hardware

What's the minimum hardware?

One machine with 8 GB RAM, running macOS or Linux. The demo starts with ./compose.sh up.

What's the maximum cluster size?

Tested in the 5–20 host range. Beyond that, a real scheduler is probably the right tool.

Does it work on Raspberry Pi?

Yes. arm64 Linux is first-class. Many users run mixed Pi + x86 + Mac clusters from one registry.

Does it work on macOS?

Yes — both Apple Silicon and Intel. Note: Apple ships Bash 3.2 by default; Portoser requires Bash 5.x. Install via Homebrew (brew install bash) before bootstrapping.
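A quick way to check which bash will be picked up before bootstrapping (on stock macOS this reports 3.2 until the Homebrew bash is first on your PATH):

```shell
# Show which bash is on PATH and its version; Portoser needs 5.x.
echo "bash on PATH: $(command -v bash)"
bash -c 'echo "version: ${BASH_VERSINFO[0]}.${BASH_VERSINFO[1]}"'
```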

Does it work on Windows?

Not directly. WSL2 should work but isn't on the tested matrix. Patches welcome.

Architecture

Why Bash?

Bash is on every host you'd ever deploy to, predates uv, and trivially shells out. The orchestration core is Bash; the web backend, frontend, and MCP server are Python and JavaScript.

Why no agents on workers?

Agents add a thing to install, monitor, version, and break. SSH is already there. The trade-off is that operations are sequenced one host at a time — that's fine in the cluster sizes Portoser targets.

How is the registry parsed?

yq for YAML, then small Bash helpers in lib/registry.sh. The parser tolerates a few non-strict patterns (env interpolation, includes); everything that should be valid YAML, is.
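To make the "non-strict patterns" concrete, here is a hypothetical registry fragment; the field names are illustrative, not the canonical Portoser schema:

```yaml
services:
  grafana:
    host: pi-worker-1
    image: grafana/grafana:10.4
    env:
      # Env interpolation: resolved by the Bash helpers, not by yq itself.
      ADMIN_PASSWORD: ${GRAFANA_ADMIN_PASSWORD}
```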

Does Portoser need a database?

The web backend uses PostgreSQL for deployment history, the MCP tool registry, and audit logs. The CLI alone needs nothing — it reads the registry and writes to ~/.portoser/.

Does it use Redis?

Yes, the web backend caches metrics in Redis to avoid hammering hosts. Optional — the cache layer falls back to an in-process LRU.

Networking & Security

Does Portoser do TLS?

Yes, via Caddy. Caddy's config is generated from the registry and applied through the live admin API.
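For reference, applying a JSON config through Caddy's admin API (which listens on localhost:2019 by default) looks like this; the filename is a placeholder, not a Portoser artifact:

```shell
# Push a generated config to the running Caddy instance via its admin API.
curl -sf -X POST http://localhost:2019/load \
  -H "Content-Type: application/json" \
  -d @generated-caddy.json
```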

Does Portoser do mTLS between services?

Yes. lib/certificates.sh generates a CA and per-service certs, and install_ca_on_hosts.sh distributes the CA bundle to workers.

How are secrets stored?

In HashiCorp Vault. Portoser ships a Vault management UI for secrets, and a migrate-all flow to move .env files into Vault.
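As an illustration of the flow this replaces, the stock Vault KV CLI equivalents look like this (the secret/portoser/... path is hypothetical, not necessarily the path Portoser uses):

```shell
# Store and read a secret with the standard Vault KV v2 CLI.
vault kv put secret/portoser/grafana admin_password='s3cret'
vault kv get -field=admin_password secret/portoser/grafana
```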

Is there RBAC?

On the backend: yes, via Keycloak groups. The web UI's login flow that consumes Keycloak tokens is still landing.

Can I expose the web UI to the internet?

Don't, until the frontend Keycloak login flow ships. Front it with a tunnel (WireGuard, Tailscale, Cloudflare Tunnel) or HTTP basic auth via Caddy.
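As a sketch, the basic-auth option can look like the Caddyfile below; the hostname and upstream port are placeholders, and basic_auth is the Caddy 2.8+ spelling (older releases call it basicauth):

```
portoser.example.com {
    # Hash generated with: caddy hash-password
    basic_auth {
        admin <bcrypt-hash>
    }
    reverse_proxy localhost:8000
}
```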

Operations

How do I roll back a bad deploy?

The web UI's Deployment History page (/history) shows every deploy with a one-click rollback. From the CLI:

portoser history list <service>
portoser history rollback <deployment-id>

What happens if a worker host dies mid-deploy?

The deploy is marked failed in the history, and the next deploy starts from a clean slate. Portoser keeps orchestration state on the coordinator only, so a worker that comes back online carries no partial state that needs cleaning up.

How do I back up a Portoser cluster?

Three things to back up: registry.yml (in git), Vault data (Vault's own backup mechanism), and the Postgres database used by the web backend (pg_dump). Detailed walkthrough is in the Operations section (coming soon).
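A minimal sketch of those three backups, assuming a Raft-backed Vault and default database names (both assumptions; adjust to your deployment):

```shell
# Back up the three stateful pieces of a Portoser cluster.
ts=$(date +%F)
mkdir -p backups
cp registry.yml "backups/registry-$ts.yml"                        # also versioned in git
pg_dump -h localhost -U portoser portoser > "backups/db-$ts.sql"  # web backend Postgres
vault operator raft snapshot save "backups/vault-$ts.snap"        # Raft storage backend only
```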

How do I upgrade Portoser?

git pull && ./compose.sh up. Migrations run automatically. Detailed walkthrough is in the Operations section (coming soon).

MCP & AI integration

Are MCP tools shipped today?

The FastMCP server, the tool registry, and the audit log are operational. First-party tools that act on the cluster are not yet shipped. You can register your own tools today via the MCP Tools reference.

Can I use Claude or Cursor with Portoser today?

Yes — point your MCP client at the FastMCP SSE endpoint and any tools you've registered will be available. First-party tools (deploy, diagnose, etc.) are landing soon.
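For example, a Cursor-style MCP client config pointing at the endpoint might look like this; the port and /mcp/sse path are illustrative, so use whatever your deployment exposes:

```json
{
  "mcpServers": {
    "portoser": {
      "url": "http://localhost:8000/mcp/sse"
    }
  }
}
```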

Will the MCP server be exposed publicly?

Not by default. Treat it the same as the web UI — keep it behind your tunnel.

Clarifications

What does uv do for Portoser?

uv is used for reproducible Python installs — uv sync reads uv.lock to install the exact dependency set. It does not change Python's runtime speed; it speeds up install time, not request handling.

Are drag-and-drop deploys autonomous?

No. Drag-and-drop in the web UI stages a move into a deployment panel. You then click Deploy to apply it. This is intentional — surprise deploys are a bad idea.

I read a doc that mentions a kag/ or kanban/ or tool_registry/ directory and it doesn't exist.

Those were references to subsystems that didn't make the v1 cut. The MCP tool registry now lives at /api/mcp/tools (in the FastAPI backend). KAG and Kanban are not part of v1.0.0-alpha. If you see a stale reference in the docs, please open a PR.

Contributing

Where do I file bugs?

GitHub issues. Include the output of portoser observe <service> and portoser diagnose <service> whenever possible.

Where do I propose features?

GitHub discussions. We try to keep the surface area small — new features generally need a use case that's awkward today and a sketch of the smallest possible implementation.

Is there a Discord / chat?

Not yet. Use GitHub Discussions on the main repo for now.