The 12-record DNS plan ($1 per record at the registrar, but only one
public R720 IP) forces the obvious: a single HAProxy on :443 must
serve staging.veza.fr + veza.fr + www.veza.fr + talas.fr +
www.talas.fr + forgejo.talas.group all at once. Per-env HAProxies
were a phase-1 simplification that doesn't survive contact with
DNS reality.
Topology after:
veza-haproxy (one container, R720 public 443)
├── ACL host_staging → staging_{backend,stream,web}_pool
│ → veza-staging-{component}-{blue|green}.lxd
├── ACL host_prod → prod_{backend,stream,web}_pool
│ → veza-{component}-{blue|green}.lxd
├── ACL host_forgejo → forgejo_backend → 10.0.20.105:3000
│ (Forgejo container managed outside the deploy pipeline)
└── ACL host_talas → talas_vitrine_backend
(placeholder 503 until the static site lands)
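A minimal sketch of what the shared frontend resolves to (ACL and
backend names follow the topology above; the bind/cert line and the
`is_api` path rule are illustrative, not the rendered template):

    frontend https_in
        bind :443 ssl crt /etc/haproxy/certs/    # cert path illustrative
        acl host_staging hdr(host) -i staging.veza.fr
        acl host_prod    hdr(host) -i veza.fr www.veza.fr
        acl host_forgejo hdr(host) -i forgejo.talas.group
        acl host_talas   hdr(host) -i talas.fr www.talas.fr
        acl is_api       path_beg /api           # example path refinement
        use_backend forgejo_backend       if host_forgejo
        use_backend talas_vitrine_backend if host_talas
        use_backend staging_backend_api   if host_staging is_api
        use_backend prod_backend_api      if host_prod is_api
        default_backend default_503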
Changes:
inventory/{staging,prod}.yml:
The `haproxy:` group in both files now points to the SAME
container `veza-haproxy` (no env prefix). A comment makes the
contract explicit so the next reader doesn't try to split it back.
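The contract, sketched (host entry details are placeholders):

    # inventory/staging.yml AND inventory/prod.yml -- identical group
    haproxy:
      hosts:
        veza-haproxy:                 # the one shared edge container
          # ansible_host: <R720 public address>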
group_vars/all/main.yml:
NEW: haproxy_env_prefixes (per-env container prefix mapping).
NEW: haproxy_env_public_hosts (per-env Host-header mapping).
NEW: haproxy_forgejo_host + haproxy_forgejo_backend.
NEW: haproxy_talas_hosts + haproxy_talas_vitrine_backend.
NEW: haproxy_letsencrypt_* (moved from env files — the edge is
shared, so the LE config is shared too; otherwise whichever
env ran the haproxy role last would clobber the domain set).
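Roughly (exact shapes assumed; the prefixes and hosts follow the
container names and domains above):

    haproxy_env_prefixes:
      staging: veza-staging
      prod: veza
    haproxy_env_public_hosts:
      staging: [staging.veza.fr]
      prod: [veza.fr, www.veza.fr]
    haproxy_forgejo_host: forgejo.talas.group
    haproxy_forgejo_backend: "10.0.20.105:3000"
    haproxy_talas_hosts: [talas.fr, www.talas.fr]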
group_vars/{staging,prod}.yml:
Strip the haproxy_letsencrypt_* block (now in all/main.yml).
A comment points readers there.
roles/haproxy/templates/haproxy.cfg.j2:
The `blue-green` topology branch is rebuilt around per-env
backends (`<env>_backend_api`, `<env>_stream_pool`,
`<env>_web_pool`) plus standalone `forgejo_backend`,
`talas_vitrine_backend` and `default_503`.
Frontend ACLs: `host_<env>` (`hdr(host) -i ...`) selects
which env's backends to use; path ACLs (`is_api`,
`is_stream_seg`, etc.) refine within the env.
The sticky cookie name is suffixed `_<env>` so a user logged
into staging doesn't carry the cookie into prod.
The per-env active color comes from the haproxy_active_colors
map (built by veza_haproxy_switch — see below).
The multi-instance branch (lab) is untouched.
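One per-env backend, sketched; the cookie prefix, port and loop shape
are assumptions, only the `_<env>` cookie suffix and the
haproxy_active_colors lookup are the contract:

    {% for env in ['staging', 'prod'] %}
    backend {{ env }}_backend_api
        {# 'VEZA_SRV' is a placeholder cookie prefix #}
        cookie VEZA_SRV_{{ env }} insert indirect nocache
        {% set active = haproxy_active_colors[env] | default('blue') %}
        server blue  {{ haproxy_env_prefixes[env] }}-backend-blue.lxd:8080  cookie blue  {{ 'backup' if active != 'blue' else '' }}
        server green {{ haproxy_env_prefixes[env] }}-backend-green.lxd:8080 cookie green {{ 'backup' if active != 'green' else '' }}
    {% endfor %}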
roles/veza_haproxy_switch/defaults/main.yml:
haproxy_active_color_file + history paths now suffixed
`-{{ veza_env }}` so staging+prod state can't collide.
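That is, something like (the history var name is an assumption; the
paths match the playbooks below):

    haproxy_active_color_file: "/var/lib/veza/active-color-{{ veza_env }}"
    haproxy_active_color_history_file: "/var/lib/veza/active-color-{{ veza_env }}.history"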
roles/veza_haproxy_switch/tasks/main.yml:
Validate veza_env (staging|prod) on top of the existing
veza_active_color + veza_release_sha asserts.
Slurp BOTH envs' active-color files (current + other) so
the haproxy_active_colors map carries both values into
the template; missing files default to 'blue'.
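A sketch of the dual slurp (task shape assumed; slurp/set_fact are
standard modules):

    - name: Read both envs' active-color files
      ansible.builtin.slurp:
        src: "/var/lib/veza/active-color-{{ item }}"
      loop: [staging, prod]
      register: active_color_raw
      failed_when: false            # missing file handled below

    - name: Build the haproxy_active_colors map (default 'blue')
      ansible.builtin.set_fact:
        haproxy_active_colors: >-
          {{ haproxy_active_colors | default({})
             | combine({item.item: (item.content | b64decode | trim)
                        if item.content is defined else 'blue'}) }}
      loop: "{{ active_color_raw.results }}"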
playbooks/deploy_app.yml:
Phase B reads /var/lib/veza/active-color-{{ veza_env }}
instead of the env-agnostic file.
playbooks/cleanup_failed.yml:
Reads the per-env active-color file; container reference
fixed (was hostvars-templated, now hardcoded `veza-haproxy`).
playbooks/rollback.yml:
Fast-mode SHA lookup reads the per-env history file.
Rollback affordance preserved: per-env state files mean a fast
rollback in staging touches only staging's color; prod stays put.
The history files (`active-color-{staging,prod}.history`) keep
the last 5 deploys per env independently.
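Illustratively (everything beyond the per-env file naming is an
assumption):

    # staging's last 5 deploys; prod's history file is untouched
    tail -n 5 /var/lib/veza/active-color-staging.history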
Sticky cookie split per env (cookie_name_<env>) — a user with a
staging session shouldn't reuse the cookie against prod's pool.
Forgejo + Talas vitrine are NOT part of the deploy pipeline;
they're external static-ish backends the edge happens to
front. haproxy_forgejo_backend is "10.0.20.105:3000" today
(matches the existing Incus container at that address).
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Veza Ansible IaC
Infrastructure-as-code for the Veza self-hosted platform. Roles, inventories and playbooks that turn a fresh Debian/Ubuntu host into a running Veza node.
Scope at v1.0.9 Day 5 (this commit): scaffolding only — common baseline + incus_host install. Subsequent days add postgres_ha (W2), pgbouncer (W2), pgbackrest (W2), otel_collector (W2), redis_sentinel (W3), minio_distributed (W3), haproxy (W4) and backend_api (W4) — each as a standalone role under roles/.
Layout
infra/ansible/
├── ansible.cfg # pinned defaults (inventory path, ControlMaster)
├── inventory/
│ ├── lab.yml # R720 lab Incus container — dry-run target
│ ├── staging.yml # Hetzner staging (TODO IP — W2 provision)
│ └── prod.yml # R720 prod (TODO IP — DNS at EX-5)
├── group_vars/
│ └── all.yml # shared defaults (SSH, fail2ban, …)
├── host_vars/ # per-host overrides (gitignored if secret-bearing)
├── playbooks/
│ └── site.yml # entry-point — applies common + incus_host
└── roles/
├── common/ # SSH hardening · fail2ban · unattended-upgrades · node_exporter
└── incus_host/ # Incus install + first-time init
Quickstart
Lab dry-run (syntax + dry-execute, no remote changes)
cd infra/ansible
ansible-playbook -i inventory/lab.yml playbooks/site.yml --check
--check is the acceptance gate for v1.0.9 Day 5 — must pass clean before merging any role change.
Lab apply
ansible-playbook -i inventory/lab.yml playbooks/site.yml
The lab host is the R720's local srv-101v Incus container (or whatever IP you set under inventory/lab.yml::veza-lab.ansible_host). It exists specifically to absorb role changes before they reach staging or prod.
Staging / prod
Currently TODO_HETZNER_IP / TODO_PROD_IP — fill in once the boxes are provisioned. Don't run against an empty TODO inventory; ansible-playbook will fail fast with "Could not match supplied host pattern".
Tags — apply a single concern
# Re-render only the SSH hardening drop-in
ansible-playbook -i inventory/lab.yml playbooks/site.yml --tags ssh
# Bump node_exporter to a newer pinned version (after editing group_vars/all.yml)
ansible-playbook -i inventory/lab.yml playbooks/site.yml --tags node_exporter
Available tags: common, packages, users, ssh, fail2ban, unattended-upgrades, monitoring, node_exporter, incus, init, service.
Roles
common — host baseline
- `ssh.yml` — drops `/etc/ssh/sshd_config.d/50-veza-hardening.conf` from a Jinja template. Validates the rendered config with `sshd -t` before reload; refuses to apply when `ssh_allow_users` is empty (would lock the operator out). See the sketch below.
- `fail2ban.yml` — `/etc/fail2ban/jail.local` with the sshd jail enabled; defaults to bantime=1h / findtime=10min / maxretry=5.
- `unattended_upgrades.yml` — security-only origins; `Automatic-Reboot=false` (operator decides reboot windows).
- `node_exporter.yml` — installs Prometheus node_exporter pinned to the version in `group_vars/all.yml::monitoring_node_exporter_version`, runs as a systemd unit on `:9100`.
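The sshd validation step looks roughly like this (a sketch; task name
and handler are assumptions, `validate:` is the standard
template-module parameter):

    - name: Render SSH hardening drop-in
      ansible.builtin.template:
        src: 50-veza-hardening.conf.j2
        dest: /etc/ssh/sshd_config.d/50-veza-hardening.conf
        mode: "0600"
        validate: /usr/sbin/sshd -t -f %s   # refuse to install a broken config
      notify: reload sshd
      tags: [common, ssh]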
Variables in `group_vars/all.yml`:
| var | default | notes |
|---|---|---|
| `ssh_port` | `22` | bump for prod once a bastion is in place |
| `ssh_permit_root_login` | `"no"` | string, not boolean (sshd config syntax) |
| `ssh_password_authentication` | `"no"` | |
| `ssh_allow_users` | `[senke, ansible]` | role asserts non-empty |
| `fail2ban_bantime` | `3600` | seconds |
| `fail2ban_findtime` | `600` | seconds |
| `fail2ban_maxretry` | `5` | |
| `unattended_upgrades_origins` | security-only | |
| `unattended_upgrades_auto_reboot` | `false` | operator-driven |
| `monitoring_node_exporter_version` | `1.8.2` | upstream pin |
| `monitoring_node_exporter_port` | `9100` | |
incus_host — Incus server install
- Adds the upstream zabbly Incus apt repo.
- Installs `incus` + `incus-client`.
- Adds the `ansible` user to `incus-admin` so subsequent roles can run `incus` non-sudo.
- First-time `incus admin init` via preseed if the host has never been initialised. Re-runs on initialised hosts are a no-op (the `incus list` probe gates the init); see the sketch below.
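The init gate, sketched (the rc-based condition and preseed variable
are assumptions; `incus admin init --preseed` reads YAML on stdin):

    - name: Probe Incus state (gates the preseed init)
      ansible.builtin.command: incus list
      register: incus_probe
      changed_when: false
      failed_when: false

    - name: First-time incus admin init via preseed
      ansible.builtin.command: incus admin init --preseed
      args:
        stdin: "{{ incus_preseed | to_yaml }}"   # incus_preseed is assumed
      when: incus_probe.rc != 0                  # assumption: probe fails only pre-init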
Bridge config:
| var | default | notes |
|---|---|---|
| `incus_bridge` | `incusbr0` | the bridge Veza app containers attach to |
| `incus_bridge_ipv4` | `10.99.0.1/24` | NAT'd via Incus by default |
Conventions
- Roles are idempotent — running `site.yml` twice produces no changes. CI eventually validates this with a `--check` after a real apply.
- No secrets in git. `host_vars/<host>.yml` is fine for non-secrets; secrets go in `host_vars/<host>.vault.yml` encrypted with `ansible-vault`. The vault key lives outside the repo.
- Tags are mandatory on every task so a partial apply (`--tags ssh,monitoring`) is always possible. A new role missing tags fails its own commit's `--check` review.
- Comment the why, not the what. Role tasks should answer "why this knob, why this default, why this guard" — the task name + module already say what.
See also
- `ROADMAP_V1.0_LAUNCH.md` §Semaine 1 day 5 — original scope brief
- `docs/runbooks/` — once roles for production services land, each gets a runbook
- `docker-compose.dev.yml` — the dev-host equivalent of these roles (kept for now; Ansible takes over for staging/prod once W2 lands)