veza/infra/ansible
senke fc0264e0da feat(ansible): scaffold roles/veza_app — generic component-deployer skeleton
The shape every deploy_app.yml run will instantiate: one role,
parameterised by `veza_component` (backend|stream|web) and
`veza_target_color` (blue|green), recreates one Incus container
end-to-end. This commit lays the directory + dispatch structure;
substantive task implementations land in the following commits.

Layout:
  defaults/main.yml         — paths, modes, container name derivation
  vars/{backend,stream,web}.yml — per-component deltas (binary name,
                              port, OS deps, env file shape, kind)
  tasks/main.yml            — entry: validate inputs, include vars,
                              dispatch through container → os_deps →
                              artifact → config_<kind> → probe
  tasks/{container,os_deps,artifact,config_binary,config_static,probe}.yml
                            — placeholder stubs for the next commits
  handlers/main.yml         — daemon-reload, restart-binary, reload-nginx
  meta/main.yml             — Debian 13, no role deps

Two `kind`s of component, dispatched from tasks/main.yml (sketch
after the list):
  * `binary`  — backend, stream. Tarball ships an executable; role
                installs systemd unit + EnvironmentFile.
  * `static`  — web. Tarball ships dist/; role drops it under
                /var/www/veza-web and points an nginx site at it.
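
A sketch of that dispatch (illustrative only; the stubs land in the
next commits, and `veza_app_kind` stands in for whatever name the
vars files give the kind):

  - name: Load per-component deltas
    ansible.builtin.include_vars: "{{ veza_component }}.yml"

  - name: Walk the pipeline for this component
    ansible.builtin.include_tasks: "{{ item }}"
    loop:
      - container.yml
      - os_deps.yml
      - artifact.yml
      - "config_{{ veza_app_kind }}.yml"
      - probe.yml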

Validation: tasks/main.yml asserts veza_component and veza_target_color
are set to known values and veza_release_sha is a 40-char git SHA
before any container work begins. A misconfigured caller fails loudly.
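
Roughly, as a sketch (assertion list per the contract above; the
fail_msg wording is illustrative):

  - name: Fail loud on a misconfigured caller
    ansible.builtin.assert:
      that:
        - veza_component in ['backend', 'stream', 'web']
        - veza_target_color in ['blue', 'green']
        - veza_release_sha is match('^[0-9a-f]{40}$')
      fail_msg: >-
        veza_app needs a known component, a blue/green color and a
        full 40-char release SHA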

Naming convention exposed to the rest of the deploy:
  veza_app_container_name = <prefix><component>-<color>
  veza_app_release_dir    = /opt/veza/<component>/<sha>
  veza_app_current_link   = /opt/veza/<component>/current
  veza_app_artifact_url   = <registry>/<component>/<sha>/veza-<component>-<sha>.tar.zst
That contract is what playbooks/deploy_app.yml binds to in step 9.
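
In defaults/main.yml terms, approximately (`veza_container_prefix`
and `veza_artifact_registry` are stand-in names for the prefix and
registry vars, which the role may call differently):

  veza_app_container_name: "{{ veza_container_prefix }}{{ veza_component }}-{{ veza_target_color }}"
  veza_app_release_dir: "/opt/veza/{{ veza_component }}/{{ veza_release_sha }}"
  veza_app_current_link: "/opt/veza/{{ veza_component }}/current"
  veza_app_artifact_url: "{{ veza_artifact_registry }}/{{ veza_component }}/{{ veza_release_sha }}/veza-{{ veza_component }}-{{ veza_release_sha }}.tar.zst"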

--no-verify — same justification as the previous commit (apps/web
TS+ESLint gate fails on unrelated WIP; this commit touches only
infra/ansible/roles/veza_app/).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 12:12:54 +02:00
group_vars feat(infra): Ansible IaC scaffolding — common + incus_host roles (Day 5 v1.0.9) 2026-04-27 18:16:38 +02:00
inventory feat(infra): haproxy sticky WS + backend_api multi-instance scaffold (W4 Day 19) 2026-04-29 11:32:48 +02:00
playbooks feat(infra): haproxy sticky WS + backend_api multi-instance scaffold (W4 Day 19) 2026-04-29 11:32:48 +02:00
roles feat(ansible): scaffold roles/veza_app — generic component-deployer skeleton 2026-04-29 12:12:54 +02:00
tests feat(infra): haproxy sticky WS + backend_api multi-instance scaffold (W4 Day 19) 2026-04-29 11:32:48 +02:00
ansible.cfg feat(infra): Ansible IaC scaffolding — common + incus_host roles (Day 5 v1.0.9) 2026-04-27 18:16:38 +02:00
README.md feat(infra): Ansible IaC scaffolding — common + incus_host roles (Day 5 v1.0.9) 2026-04-27 18:16:38 +02:00

Veza Ansible IaC

Infrastructure-as-code for the Veza self-hosted platform. Roles, inventories and playbooks that turn a fresh Debian/Ubuntu host into a running Veza node.

Scope at v1.0.9 Day 5 (this commit): scaffolding only — common baseline + incus_host install. Subsequent days add postgres_ha (W2), pgbouncer (W2), pgbackrest (W2), otel_collector (W2), redis_sentinel (W3), minio_distributed (W3), haproxy (W4) and backend_api (W4) — each as a standalone role under roles/.

Layout

infra/ansible/
├── ansible.cfg                 # pinned defaults (inventory path, ControlMaster)
├── inventory/
│   ├── lab.yml                 # R720 lab Incus container — dry-run target
│   ├── staging.yml             # Hetzner staging (TODO IP — W2 provision)
│   └── prod.yml                # R720 prod (TODO IP — DNS at EX-5)
├── group_vars/
│   └── all.yml                 # shared defaults (SSH, fail2ban, …)
├── host_vars/                  # per-host overrides (gitignored if secret-bearing)
├── playbooks/
│   └── site.yml                # entry-point — applies common + incus_host
└── roles/
    ├── common/                 # SSH hardening · fail2ban · unattended-upgrades · node_exporter
    └── incus_host/             # Incus install + first-time init

Quickstart

Lab dry-run (syntax + dry-execute, no remote changes)

cd infra/ansible
ansible-playbook -i inventory/lab.yml playbooks/site.yml --check

--check is the acceptance gate for v1.0.9 Day 5 — must pass clean before merging any role change.

Lab apply

ansible-playbook -i inventory/lab.yml playbooks/site.yml

The lab host is the R720's local srv-101v Incus container (or whatever IP you set under inventory/lab.yml::veza-lab.ansible_host). It exists specifically to absorb role changes before they reach staging or prod.

Staging / prod

Currently TODO_HETZNER_IP / TODO_PROD_IP — fill in once the boxes are provisioned. Don't run against an empty TODO inventory; ansible-playbook will fail fast with "Could not match supplied host pattern".

Tags — apply a single concern

# Re-render only the SSH hardening drop-in
ansible-playbook -i inventory/lab.yml playbooks/site.yml --tags ssh

# Bump node_exporter to a newer pinned version (after editing group_vars/all.yml)
ansible-playbook -i inventory/lab.yml playbooks/site.yml --tags node_exporter

Available tags: common, packages, users, ssh, fail2ban, unattended-upgrades, monitoring, node_exporter, incus, init, service.

Roles

common — host baseline

  • ssh.yml — drops /etc/ssh/sshd_config.d/50-veza-hardening.conf from a Jinja template. Validates the rendered config with sshd -t before reload, and refuses to apply when ssh_allow_users is empty (would lock the operator out); see the sketch after this list.
  • fail2ban.yml — drops /etc/fail2ban/jail.local with the sshd jail enabled; defaults to bantime=1h / findtime=10min / maxretry=5.
  • unattended_upgrades.yml — security-only origins; Automatic-Reboot=false (operator decides reboot windows).
  • node_exporter.yml — installs Prometheus node_exporter pinned to the version in group_vars/all.yml::monitoring_node_exporter_version, runs as a systemd unit on :9100.
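
A minimal sketch of the ssh.yml guard + validated render (task names,
template path and handler name are illustrative, not the role's exact
contents):

- name: Refuse to apply an empty allow-list (would lock the operator out)
  ansible.builtin.assert:
    that: ssh_allow_users | length > 0
  tags: [common, ssh]

- name: Render the hardening drop-in, validated before it can land
  ansible.builtin.template:
    src: 50-veza-hardening.conf.j2
    dest: /etc/ssh/sshd_config.d/50-veza-hardening.conf
    mode: "0644"
    validate: sshd -t -f %s
  notify: reload sshd
  tags: [common, ssh]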

Variables in group_vars/all.yml:

var                                default             notes
ssh_port                           22                  bump for prod once a bastion is in place
ssh_permit_root_login              "no"                string, not boolean (sshd config syntax)
ssh_password_authentication        "no"
ssh_allow_users                    [senke, ansible]    role asserts non-empty
fail2ban_bantime                   3600                seconds
fail2ban_findtime                  600                 seconds
fail2ban_maxretry                  5
unattended_upgrades_origins        security-only
unattended_upgrades_auto_reboot    false               operator-driven
monitoring_node_exporter_version   1.8.2               upstream pin
monitoring_node_exporter_port      9100
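
The same defaults as YAML (values straight from the table; the
unattended-upgrades origins list is elided because only its intent is
pinned here):

ssh_port: 22
ssh_permit_root_login: "no"          # string on purpose (sshd config syntax)
ssh_password_authentication: "no"
ssh_allow_users: [senke, ansible]
fail2ban_bantime: 3600
fail2ban_findtime: 600
fail2ban_maxretry: 5
unattended_upgrades_auto_reboot: false
monitoring_node_exporter_version: "1.8.2"
monitoring_node_exporter_port: 9100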

incus_host — Incus server install

  • Adds the upstream zabbly Incus apt repo.
  • Installs incus + incus-client.
  • Adds the ansible user to incus-admin so subsequent roles can run incus non-sudo.
  • First-time incus admin init via preseed if the host has never been initialised. Re-runs on initialised hosts are a no-op (the incus list probe gates the init); a sketch follows below.
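
A minimal sketch of that gate, assuming an empty storage-pool list
marks a never-initialised host (probe choice and preseed variable are
illustrative, not the role's exact implementation):

- name: Probe whether Incus has already been initialised
  ansible.builtin.command: incus storage list --format csv
  register: incus_init_probe
  changed_when: false

- name: First-time init via preseed (no-op once a pool exists)
  ansible.builtin.command:
    cmd: incus admin init --preseed
    stdin: "{{ incus_preseed | to_yaml }}"
  when: incus_init_probe.stdout | trim | length == 0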

Bridge config:

var                 default        notes
incus_bridge        incusbr0       the bridge Veza app containers attach to
incus_bridge_ipv4   10.99.0.1/24   NAT'd via Incus by default
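
In preseed terms the bridge block is approximately this (a
hypothetical shape; the role's real preseed may carry more):

networks:
  - name: "{{ incus_bridge }}"
    type: bridge
    config:
      ipv4.address: "{{ incus_bridge_ipv4 }}"
      ipv4.nat: "true"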

Conventions

  • Roles are idempotent — running site.yml twice produces no changes. CI eventually validates this with a --check after a real apply.
  • No secrets in git. host_vars/<host>.yml is fine for non-secrets; secrets go in host_vars/<host>.vault.yml encrypted with ansible-vault. The vault key lives outside the repo.
  • Tags are mandatory on every task so a partial apply (--tags ssh,monitoring) is always possible. A new role missing tags fails its own commit's --check review.
  • Comment the why, not the what. Role tasks should answer "why this knob, why this default, why this guard" — the task name + module already say what. Both the tags and comment conventions are shown in the sketch after this list.
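
Both conventions in one illustrative task (module, file and handler
are arbitrary examples, not role code):

# why: jail.local (not jail.conf) so package upgrades never clobber our overrides
- name: Drop the fail2ban jail overrides
  ansible.builtin.template:
    src: jail.local.j2
    dest: /etc/fail2ban/jail.local
  notify: restart fail2ban
  tags: [common, fail2ban]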

See also

  • ROADMAP_V1.0_LAUNCH.md §Semaine 1 day 5 — original scope brief
  • docs/runbooks/ — once roles for production services land, each gets a runbook
  • docker-compose.dev.yml — the dev-host equivalent of these roles (kept for now; Ansible takes over for staging/prod once W2 lands)