The R720 has five managed Incus bridges, organized by trust zone:
net-ad       10.0.50.0/24  admin
net-dmz      10.0.10.0/24  DMZ
net-sandbox  10.0.30.0/24  sandbox
net-veza     10.0.20.0/24  Veza (forgejo + 12 other containers)
incusbr0     10.0.0.0/24   default
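As a sanity check, the bridge inventory above can be verified before any playbook runs. This is a sketch, not part of the repo: `check_bridges` is a hypothetical helper, and the live-usage line assumes the `incus` CLI is on PATH.

```shell
# Expected bridges, from the table above.
expected="net-ad net-dmz net-sandbox net-veza incusbr0"

# check_bridges: report any expected bridge missing from the host.
# $1 is a newline-separated list of bridge names present on the host.
check_bridges() {
  present="$1"
  for br in $expected; do
    printf '%s\n' "$present" | grep -qx "$br" || echo "missing bridge: $br"
  done
}

# Live usage (hypothetical):
#   check_bridges "$(incus network list --format csv | cut -d, -f1)"
```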
Veza belongs on `net-veza`. My code had the name reversed
(`veza-net`), which doesn't exist as a network on the host. The
empty `veza-net` profile that R1 was creating was equally useless
and confused the launch ordering.
Changes:
* group_vars/staging.yml
  veza_incus_network: veza-staging-net → net-veza
  veza_incus_subnet: 10.0.21.0/24 → 10.0.20.0/24
  A comment block explains why staging and prod share net-veza in
  v1.0 (WireGuard ingress + per-env prefix + per-env vault is the
  trust boundary; a per-env subnet split is v1.1 hardening) and how
  to flip to a dedicated bridge later.
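One way to catch this class of subnet drift in future is to compare the group_vars value against the bridge's actual gateway address. A minimal sketch: `subnet_matches` is a hypothetical helper, the variable values mirror the new staging defaults, and the live-usage line assumes the `incus` CLI is on PATH.

```shell
veza_incus_network="net-veza"
veza_incus_subnet="10.0.20.0/24"

# subnet_matches: true if a gateway CIDR (e.g. 10.0.20.1/24) falls inside
# the declared /24. A string prefix match is enough for /24 subnets.
subnet_matches() {
  prefix="${veza_incus_subnet%.0/24}."   # "10.0.20.0/24" -> "10.0.20."
  case "$1" in
    "$prefix"*) return 0 ;;
    *)          return 1 ;;
  esac
}

# Live usage (hypothetical):
#   subnet_matches "$(incus network get net-veza ipv4.address)" \
#     || echo "subnet drifted from $veza_incus_subnet"
```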
* group_vars/prod.yml
  veza_incus_network: veza-net → net-veza
* playbooks/haproxy.yml
  incus launch ... --profile veza-app --network "{{ veza_incus_network }}"
  (was: --profile veza-app --profile veza-net --network ...)
* playbooks/deploy_data.yml + deploy_app.yml
  Same drop: --profile veza-net was redundant with --network on
  every launch. Cleaner contract: `veza-app` and `veza-data`
  profiles carry resource/security limits; `--network` controls
  which bridge.
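The contract above can be made concrete as the argv the playbooks now emit. The image and instance names below are hypothetical placeholders; only the one-profile-plus-`--network` shape comes from this change.

```shell
# build_launch_cmd: one profile for limits, --network for bridge placement.
# $1: instance name  $2: profile  $3: bridge
build_launch_cmd() {
  printf 'incus launch images:debian/12 %s --profile %s --network %s\n' \
    "$1" "$2" "$3"
}

# Example (hypothetical instance name):
#   build_launch_cmd veza-forgejo veza-app net-veza
```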
* scripts/bootstrap/bootstrap-remote.sh R1
  Stop creating the `veza-net` profile. Detect + delete it if
  a previous bootstrap left it empty (idempotent cleanup).
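The R1 cleanup can be sketched as follows. It assumes `incus profile show` prints `devices: {}` and `used_by: []` for an empty, unused profile (both are assumptions about the YAML output), and `profile_is_unused` is a hypothetical helper, not the actual script code.

```shell
# profile_is_unused: true when a profile's YAML shows no devices and no
# instances referencing it, i.e. it is safe to delete.
# $1: output of `incus profile show <name>`
profile_is_unused() {
  printf '%s\n' "$1" | grep -q '^devices: {}$' &&
    printf '%s\n' "$1" | grep -q '^used_by: \[\]$'
}

# Live usage (hypothetical):
#   if incus profile show veza-net >/dev/null 2>&1; then
#     profile_is_unused "$(incus profile show veza-net)" &&
#       incus profile delete veza-net
#   fi
```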
The phase-5 auto-detect from the previous commit already finds
`net-veza` by querying forgejo's network; those changes still
apply, and this commit just makes the static defaults match reality.
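The detection step can be sketched without jq, assuming `incus list <name> --format json` emits compact JSON whose nic device carries a `network` key (both assumptions about the CLI output); `parse_instance_network` is a hypothetical helper.

```shell
# parse_instance_network: pull the first "network" value out of compact
# instance JSON, e.g. from `incus list forgejo --format json`.
parse_instance_network() {
  printf '%s\n' "$1" | tr ',' '\n' |
    sed -n 's/.*"network":"\([^"]*\)".*/\1/p' | head -n 1
}

# Live usage (hypothetical):
#   net=$(parse_instance_network "$(incus list forgejo --format json)")
```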
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>