The 12-record DNS plan ($1 per record at the registrar, but only one
public R720 IP) forces the obvious: a single HAProxy on :443 must
serve staging.veza.fr + veza.fr + www.veza.fr + talas.fr +
www.talas.fr + forgejo.talas.group all at once. Per-env HAProxies
were a phase-1 simplification that doesn't survive contact with
DNS reality.
Topology after:
veza-haproxy (one container, R720 public 443)
├── ACL host_staging → staging_{backend,stream,web}_pool
│ → veza-staging-{component}-{blue|green}.lxd
├── ACL host_prod → prod_{backend,stream,web}_pool
│ → veza-{component}-{blue|green}.lxd
├── ACL host_forgejo → forgejo_backend → 10.0.20.105:3000
│ (Forgejo container managed outside the deploy pipeline)
└── ACL host_talas → talas_vitrine_backend
(placeholder 503 until the static site lands)
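In haproxy.cfg terms the host fan-out could look like this (a
sketch: frontend name, cert path, and the per-env default pool are
assumptions; ACL and backend names match the tree):

    frontend fe_https
        bind :443 ssl crt /etc/haproxy/certs/
        acl host_staging hdr(host) -i staging.veza.fr
        acl host_prod    hdr(host) -i veza.fr www.veza.fr
        acl host_forgejo hdr(host) -i forgejo.talas.group
        acl host_talas   hdr(host) -i talas.fr www.talas.fr
        use_backend forgejo_backend       if host_forgejo
        use_backend talas_vitrine_backend if host_talas
        # path ACLs (is_api, is_stream_seg, ...) refine to the
        # api/stream pools within each env; web pool is the fallback
        use_backend staging_web_pool      if host_staging
        use_backend prod_web_pool         if host_prod
        default_backend default_503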
Changes:
inventory/{staging,prod}.yml:
  The `haproxy:` group in BOTH inventories now points to the same
  container, `veza-haproxy` (no env prefix). A comment makes the
  contract explicit so the next reader doesn't try to split it back.
group_vars/all/main.yml:
  NEW: haproxy_env_prefixes (per-env container prefix mapping).
  NEW: haproxy_env_public_hosts (per-env Host-header mapping).
  NEW: haproxy_forgejo_host + haproxy_forgejo_backend.
  NEW: haproxy_talas_hosts + haproxy_talas_vitrine_backend.
  NEW: haproxy_letsencrypt_* (moved from the env files — the edge
       is shared, so the LE config is shared too. Otherwise the env
       that ran the haproxy role last would clobber the
       domain set).
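  Rough shape of the new vars (a sketch: values are pulled from the
  host list and topology above; the empty talas backend is an
  assumption matching the 503 placeholder):

    haproxy_env_prefixes:
      staging: "veza-staging-"
      prod: "veza-"
    haproxy_env_public_hosts:
      staging: ["staging.veza.fr"]
      prod: ["veza.fr", "www.veza.fr"]
    haproxy_forgejo_host: "forgejo.talas.group"
    haproxy_forgejo_backend: "10.0.20.105:3000"
    haproxy_talas_hosts: ["talas.fr", "www.talas.fr"]
    haproxy_talas_vitrine_backend: ""   # 503 placeholder until the site lands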
group_vars/{staging,prod}.yml:
  Strip the haproxy_letsencrypt_* block (now in all/main.yml).
  Comment points readers there.
roles/haproxy/templates/haproxy.cfg.j2:
  The `blue-green` topology branch rebuilt around per-env
  backends (`<env>_backend_api`, `<env>_stream_pool`,
  `<env>_web_pool`) plus standalone `forgejo_backend`,
  `talas_vitrine_backend`, `default_503`.
  Frontend ACLs: `host_<env>` (hdr(host) -i ...) selects
  which env's backends to use; path ACLs (`is_api`,
  `is_stream_seg`, etc.) refine within the env.
  Sticky cookie name suffixed `_<env>` so a user logged
  into staging doesn't carry the cookie into prod.
  Per-env active color comes from the haproxy_active_colors map
  (built by veza_haproxy_switch — see below).
  Multi-instance branch (lab) untouched.
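  Gist of the per-env backend loop (a sketch: the port, server
  label, and base cookie name are assumptions; container naming
  follows the tree above):

    {% for env in ['staging', 'prod'] %}
    backend {{ env }}_backend_api
        # cookie suffixed per env so sessions can't cross envs
        cookie veza_srv_{{ env }} insert indirect nocache
        server app-{{ haproxy_active_colors[env] }} {{ haproxy_env_prefixes[env] }}backend-{{ haproxy_active_colors[env] }}.lxd:8080 cookie {{ haproxy_active_colors[env] }} check
    {% endfor %}

    backend default_503
        http-request deny deny_status 503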
roles/veza_haproxy_switch/defaults/main.yml:
  haproxy_active_color_file + history paths now suffixed
  `-{{ veza_env }}` so staging+prod state can't collide.
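  Concretely (the history var name is a guess; the color-file path
  matches deploy_app.yml below):

    haproxy_active_color_file: "/var/lib/veza/active-color-{{ veza_env }}"
    haproxy_active_color_history: "/var/lib/veza/active-color-{{ veza_env }}.history"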
roles/veza_haproxy_switch/tasks/main.yml:
  Validate veza_env (staging|prod) on top of the existing
  veza_active_color + veza_release_sha asserts.
  Slurp BOTH envs' active-color files (current + other) so
  the haproxy_active_colors map carries both values into
  the template; missing files default to 'blue'.
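  Shape of the dual slurp (module args are a sketch; the fact name
  and the 'blue' default follow the text):

    - name: Read both envs' active-color files
      ansible.builtin.slurp:
        src: "/var/lib/veza/active-color-{{ item }}"
      loop: [staging, prod]
      register: color_files
      failed_when: false          # missing file => 'blue' below

    - name: Build haproxy_active_colors for the template
      ansible.builtin.set_fact:
        haproxy_active_colors: >-
          {{ (haproxy_active_colors | default({}))
             | combine({ item.item:
                 (item.content | b64decode | trim)
                 if item.content is defined else 'blue' }) }}
      loop: "{{ color_files.results }}"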
playbooks/deploy_app.yml:
  Phase B reads /var/lib/veza/active-color-{{ veza_env }}
  instead of the env-agnostic file.
playbooks/cleanup_failed.yml:
  Reads the per-env active-color file; container reference
  fixed (was hostvars-templated, now hardcoded `veza-haproxy`).
playbooks/rollback.yml:
  Fast-mode SHA lookup reads the per-env history file.
Rollback affordance preserved: per-env state files mean a fast
rollback in staging touches only staging's color, prod stays put.
The history files (`active-color-{staging,prod}.history`) keep
the last 5 deploys per env independently.
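Fast-mode lookup then reduces to something like this (the history
line format is an assumption: one `<sha> <color>` entry per deploy,
newest last):

    - name: Fast rollback reads only this env's history
      ansible.builtin.slurp:
        src: "/var/lib/veza/active-color-{{ veza_env }}.history"
      register: env_history

    - name: Previous deploy's SHA (second-to-last entry)
      ansible.builtin.set_fact:
        veza_release_sha: "{{ (env_history.content | b64decode).splitlines()[-2].split(' ') | first }}"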
Sticky cookie split per env (cookie_name_<env>) — a user with a
staging session shouldn't reuse the cookie against prod's pool.
Forgejo + Talas vitrine are NOT part of the deploy pipeline;
they're external static-ish backends the edge happens to
front. haproxy_forgejo_backend is "10.0.20.105:3000" today
(matches the existing Incus container at that address).
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
inventory/prod.yml (96 lines, 2.8 KiB, YAML):
# Prod inventory — single R720 (self-hosted Incus) at v1.0 launch,
# Hetzner overflow post-launch. ROADMAP_V1.0_LAUNCH.md §2 documents
# the COMPRESSED HA stance: real multi-host HA arrives v1.1+; v1.0
# ships single-host with EC4+2 MinIO + PgAutoFailover colocated.
#
# Topology mirrors staging.yml (same shape, different prefix +
# different network — see group_vars/prod.yml). Phase-2 (post v1.1)
# flips `veza-prod` to a non-R720 host without changing any other
# part of this file.
#
# Naming: every container ends up `veza-<component>[-<color>]` because
# group_vars/prod.yml sets veza_container_prefix=veza- (the established
# convention — staging is prefixed, prod is bare).
all:
  hosts:
    veza-prod:
      ansible_host: 10.0.20.150
      ansible_user: ansible
      ansible_python_interpreter: /usr/bin/python3
  children:
    incus_hosts:
      hosts:
        veza-prod:

    # SHARED edge — one HAProxy on the R720 public 443. Serves
    # staging + prod + forgejo.talas.group simultaneously. Same
    # container in both staging.yml and prod.yml inventories.
    haproxy:
      hosts:
        veza-haproxy:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3

    veza_app_backend:
      children:
        veza_app_backend_blue:
        veza_app_backend_green:
        veza_app_backend_tools:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_app_backend_blue:
      hosts:
        veza-backend-blue:
    veza_app_backend_green:
      hosts:
        veza-backend-green:
    veza_app_backend_tools:
      hosts:
        veza-backend-tools:   # ephemeral, Phase A only

    veza_app_stream:
      children:
        veza_app_stream_blue:
        veza_app_stream_green:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_app_stream_blue:
      hosts:
        veza-stream-blue:
    veza_app_stream_green:
      hosts:
        veza-stream-green:

    veza_app_web:
      children:
        veza_app_web_blue:
        veza_app_web_green:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_app_web_blue:
      hosts:
        veza-web-blue:
    veza_app_web_green:
      hosts:
        veza-web-green:

    veza_data:
      children:
        veza_data_postgres:
        veza_data_redis:
        veza_data_rabbitmq:
        veza_data_minio:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_data_postgres:
      hosts:
        veza-postgres:
    veza_data_redis:
      hosts:
        veza-redis:
    veza_data_rabbitmq:
      hosts:
        veza-rabbitmq:
    veza_data_minio:
      hosts:
        veza-minio: