veza/infra/ansible/roles/haproxy/templates/haproxy.cfg.j2

# Managed by Ansible — do not edit by hand.
# v1.0.9 W4 Day 19 (multi-instance) → W5+ extended to blue/green.
# `haproxy_topology` (set in group_vars/<env>.yml) selects between:
#
#   multi-instance (default, lab) — server list comes from the inventory
#     groups backend_api_instances and stream_server_instances; the sticky
#     cookie load-balances across N peers.
#   blue-green (staging, prod) — server list is exactly two:
#     <prefix>backend-blue + <prefix>backend-green. veza_active_color
#     picks which one is primary; the other is `backup` (HAProxy routes
#     to a backup server only when ALL primaries are down).
#
# The veza_haproxy_switch role re-renders this template with a new
# active_color, validates, atomic-swaps, and HUPs.
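#
# Illustrative group_vars wiring (a sketch only — values shown here are
# examples, not the canonical group_vars contents):
#
#   # group_vars/prod.yml
#   haproxy_topology: blue-green
#   veza_active_color: blue    # flipped by veza_haproxy_switch on deploy
#
#   # lab: nothing to set — haproxy_topology defaults to 'multi-instance'.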
global
log /dev/log local0
log /dev/log local1 notice
chroot /var/lib/haproxy
stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
stats timeout 30s
user haproxy
group haproxy
daemon
server-state-file /var/lib/haproxy/server-state
ssl-default-bind-options no-sslv3 no-tlsv10 no-tlsv11
ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305
defaults
log global
mode http
option httplog
option dontlognull
option forwardfor
option http-server-close
timeout connect 5s
timeout client 60s
timeout server 60s
timeout tunnel 1h
timeout client-fin 5s
timeout http-keep-alive 15s
timeout http-request 10s
load-server-state-from-file global
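# server-state-file (global) + load-server-state-from-file keep server
# UP/DOWN and slowstart state across reloads. This assumes the reload path
# dumps the state beforehand — a typical sketch (not shipped by this role):
#   echo "show servers state" | socat stdio /run/haproxy/admin.sock \
#       > /var/lib/haproxy/server-state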
# -----------------------------------------------------------------------
# Stats endpoint — bound to loopback only; the Prometheus haproxy_exporter
# sidecar scrapes it.
# -----------------------------------------------------------------------
frontend stats
bind 127.0.0.1:{{ haproxy_listen_stats }}
stats enable
stats uri /stats
stats refresh 5s
stats show-node
stats show-legends
no log
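# How the exporter sidecar is expected to scrape this (a sketch only — the
# sidecar itself is provisioned elsewhere; ';csv' asks HAProxy for CSV stats):
#   haproxy_exporter \
#     --haproxy.scrape-uri="http://127.0.0.1:{{ haproxy_listen_stats }}/stats;csv"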
# -----------------------------------------------------------------------
# Frontend — HTTP + (optionally) HTTPS. ACL-driven path routing.
# -----------------------------------------------------------------------
frontend veza_http_in
bind *:{{ haproxy_listen_http }}
{% if haproxy_letsencrypt | default(false) %}
bind *:{{ haproxy_listen_https }} ssl crt {{ haproxy_tls_cert_dir }}/ alpn h2,http/1.1
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains"
# Let dehydrated's HTTP-01 challenges through unencrypted before any redirect.
acl acme_challenge path_beg /.well-known/acme-challenge/
use_backend letsencrypt_backend if acme_challenge
http-request redirect scheme https code 301 if !{ ssl_fc } !acme_challenge
{% elif haproxy_tls_cert_path %}
bind *:{{ haproxy_listen_https }} ssl crt {{ haproxy_tls_cert_path }} alpn h2,http/1.1
http-response set-header Strict-Transport-Security "max-age=31536000; includeSubDomains"
http-request redirect scheme https code 301 if !{ ssl_fc }
{% endif %}
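{% if haproxy_letsencrypt | default(false) %}
# Illustrative sanity check (any domain from haproxy_letsencrypt_domains):
#   curl -sI http://<domain>/.well-known/acme-challenge/probe
# must reach letsencrypt_backend (no 301 to HTTPS), or issuance will fail.
{% endif %}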
{% if haproxy_topology | default('multi-instance') == 'blue-green' %}
# ===================================================================
# Host-based routing — single edge HAProxy serves all envs + Forgejo
# ===================================================================
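# The map below is consumed as env → list of public Host headers; an
# illustrative shape (example values, not the canonical group_vars):
#   haproxy_env_public_hosts:
#     staging: ["staging.veza.fr"]
#     prod: ["veza.fr", "www.veza.fr"]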
{% for env, hosts in haproxy_env_public_hosts.items() %}
acl host_{{ env }} hdr(host),lower -i {{ hosts | join(' ') }}
{% endfor %}
{% if haproxy_forgejo_host %}
acl host_forgejo hdr(host),lower -i {{ haproxy_forgejo_host }}
{% endif %}
{% if haproxy_talas_hosts %}
acl host_talas hdr(host),lower -i {{ haproxy_talas_hosts | join(' ') }}
{% endif %}
# Path ACLs (apply within each env's traffic)
acl is_api path_beg /api/v1
# path_beg and path_end cannot be combined on one acl line, so match HLS
# playlists/segments under /tracks/ with a single regex instead.
acl is_stream_seg path_reg ^/tracks/.+\.(m3u8|ts|m4s)$
acl is_stream_path path_beg /stream
acl is_stream_path path_beg /hls
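# Illustrative matches: /api/v1/health → is_api; /tracks/42/index.m3u8 and
# /tracks/42/seg0001.ts → is_stream_seg; /stream/ws and /hls/master.m3u8 →
# is_stream_path.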
# ===================================================================
# Routing — per env: API → backend, /tracks/* /stream /hls → stream,
# everything else → web. Forgejo and Talas bypass the path logic.
# ===================================================================
{% if haproxy_forgejo_host %}
use_backend forgejo_backend if host_forgejo
{% endif %}
{% if haproxy_talas_hosts %}
use_backend talas_vitrine_backend if host_talas
{% endif %}
{% for env in haproxy_env_public_hosts.keys() %}
use_backend {{ env }}_backend_api if host_{{ env }} is_api
use_backend {{ env }}_stream_pool if host_{{ env }} is_stream_seg
use_backend {{ env }}_stream_pool if host_{{ env }} is_stream_path
use_backend {{ env }}_web_pool if host_{{ env }}
{% endfor %}
# Default backend — the request matched no known host. Route to the Talas
# vitrine if configured, otherwise serve a hard 503.
{% if haproxy_talas_hosts %}
default_backend talas_vitrine_backend
{% else %}
default_backend default_503
{% endif %}
{% else %}
acl is_api path_beg /api/v1
# Single regex — path_beg and path_end cannot be combined on one acl line.
acl is_stream path_reg ^/tracks/.+\.(m3u8|ts|m4s)$
use_backend stream_pool if is_stream
default_backend api_pool
{% endif %}
{% if haproxy_topology | default('multi-instance') == 'blue-green' %}
# =======================================================================
# BLUE / GREEN backends, per env (staging + prod)
#
# haproxy_active_colors comes from the veza_haproxy_switch role's
# set_fact in tasks/main.yml — it always carries BOTH envs' current
# colors so a staging deploy doesn't drop the prod backend (and vice versa).
# =======================================================================
{% set active_colors = haproxy_active_colors | default({'staging': 'blue', 'prod': 'blue'}) %}
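# Illustrative shape only (values here are an assumption for readability ;
# the real map is slurped from the per-env active-color files by
# veza_haproxy_switch, and missing files default to 'blue') :
#   haproxy_active_colors:
#     staging: green
#     prod: blue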
{% for env, prefix in haproxy_env_prefixes.items() %}
{% set _active = active_colors[env] | default('blue') %}
# --- {{ env }} : backend API (Go) -------------------------------------
backend {{ env }}_backend_api
balance roundrobin
option httpchk GET {{ veza_healthcheck_paths.backend | default('/api/v1/health') }}
http-check expect status 200
cookie {{ haproxy_sticky_cookie_name }}_{{ env }} insert indirect nocache httponly secure
default-server check inter {{ haproxy_health_check_interval_ms }} fall {{ haproxy_health_check_fall }} rise {{ haproxy_health_check_rise }} on-marked-down shutdown-sessions slowstart {{ haproxy_graceful_drain_seconds }}s
server {{ env }}_backend_blue {{ prefix }}backend-blue.{{ veza_incus_dns_suffix }}:{{ veza_backend_port }} cookie {{ env }}_backend_blue {{ '' if _active == 'blue' else 'backup' }}
server {{ env }}_backend_green {{ prefix }}backend-green.{{ veza_incus_dns_suffix }}:{{ veza_backend_port }} cookie {{ env }}_backend_green {{ '' if _active == 'green' else 'backup' }}
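# Illustrative rendering for env=staging with prefix veza-staging- and active
# color green (port and DNS suffix are assumptions taken from group_vars) :
#   server staging_backend_blue  veza-staging-backend-blue.lxd:8080  cookie staging_backend_blue  backup
#   server staging_backend_green veza-staging-backend-green.lxd:8080 cookie staging_backend_green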
# --- {{ env }} : stream pool (Rust) -----------------------------------
backend {{ env }}_stream_pool
balance uri whole
hash-type consistent
option httpchk GET {{ veza_healthcheck_paths.stream | default('/health') }}
http-check expect status 200
timeout tunnel 1h
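# URI hash + consistent hashing keep a given /tracks/<track_id> playlist and
# its segments on the same color's cache ; the long tunnel timeout covers
# HLS sessions and upgraded connections that outlive the default http timeouts.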
default-server check inter {{ haproxy_health_check_interval_ms }} fall {{ haproxy_health_check_fall }} rise {{ haproxy_health_check_rise }} on-marked-down shutdown-sessions slowstart {{ haproxy_graceful_drain_seconds }}s
server {{ env }}_stream_blue {{ prefix }}stream-blue.{{ veza_incus_dns_suffix }}:{{ veza_stream_port }} {{ '' if _active == 'blue' else 'backup' }}
server {{ env }}_stream_green {{ prefix }}stream-green.{{ veza_incus_dns_suffix }}:{{ veza_stream_port }} {{ '' if _active == 'green' else 'backup' }}
# --- {{ env }} : web pool (nginx) -------------------------------------
backend {{ env }}_web_pool
balance roundrobin
option httpchk GET {{ veza_healthcheck_paths.web | default('/') }}
http-check expect status 200
default-server check inter {{ haproxy_health_check_interval_ms }} fall {{ haproxy_health_check_fall }} rise {{ haproxy_health_check_rise }} on-marked-down shutdown-sessions slowstart {{ haproxy_graceful_drain_seconds }}s
server {{ env }}_web_blue {{ prefix }}web-blue.{{ veza_incus_dns_suffix }}:{{ veza_web_port }} {{ '' if _active == 'blue' else 'backup' }}
server {{ env }}_web_green {{ prefix }}web-green.{{ veza_incus_dns_suffix }}:{{ veza_web_port }} {{ '' if _active == 'green' else 'backup' }}
{% endfor %}
{% if haproxy_forgejo_host %}
# --- Forgejo (managed outside the deploy pipeline) --------------------
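# haproxy_forgejo_backend is a plain "host:port" string defined in
# group_vars/all (10.0.20.105:3000 at the time of writing).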
backend forgejo_backend
option httpchk GET /
http-check expect status 200
default-server check inter 10s fall 3 rise 2
server forgejo {{ haproxy_forgejo_backend }}
{% endif %}
{% if haproxy_talas_hosts %}
# --- Talas vitrine (placeholder until the site lands) -----------------
backend talas_vitrine_backend
{% if haproxy_talas_vitrine_backend %}
default-server check inter 5s
server talas {{ haproxy_talas_vitrine_backend }}
{% else %}
# No backend configured yet — return 503 with a small body.
http-request return status 503 content-type text/plain string "Talas vitrine — coming soon."
{% endif %}
{% endif %}
# --- 503 catch-all ----------------------------------------------------
backend default_503
http-request return status 503 content-type text/plain string "Unknown host"
{% else %}
# =======================================================================
# MULTI-INSTANCE topology (lab, default)
# Server list comes from inventory groups ; sticky cookie load-balances.
# =======================================================================
# -----------------------------------------------------------------------
# Backend api_pool — Gin REST API. Sticky cookie + active health check.
# -----------------------------------------------------------------------
backend api_pool
balance roundrobin
option httpchk GET /api/v1/health
http-check expect status 200
cookie {{ haproxy_sticky_cookie_name }} insert indirect nocache httponly secure
    default-server check inter {{ haproxy_health_check_interval_ms }} fall {{ haproxy_health_check_fall }} rise {{ haproxy_health_check_rise }} on-marked-down shutdown-sessions slowstart {{ haproxy_graceful_drain_seconds }}s
{% set api_hosts = (groups['backend_api_instances'] | default(haproxy_backend_api_fallback)) %}
{% for host in api_hosts %}
server {{ host }} {{ host }}.lxd:{{ haproxy_backend_api_port }} cookie {{ host }}
{% endfor %}
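    # Illustrative rendering only — lab group names and port 8080 are assumed
    # here; the real values come from the inventory and role defaults:
    #   server backend-api-1 backend-api-1.lxd:8080 cookie backend-api-1
    #   server backend-api-2 backend-api-2.lxd:8080 cookie backend-api-2
    # A client pinned by the {{ haproxy_sticky_cookie_name }} cookie stays on
    # one server until its health check marks it down, then rebalances.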
# -----------------------------------------------------------------------
# Backend stream_pool — Rust Axum HLS. URI hash so the same track_id
# consistently lands on the same node.
# -----------------------------------------------------------------------
backend stream_pool
balance uri whole
hash-type consistent
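    # Consistent hashing keeps most URI->server mappings stable when a node
    # joins or leaves, instead of reshuffling the whole pool.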
option httpchk GET /health
http-check expect status 200
    default-server check inter {{ haproxy_health_check_interval_ms }} fall {{ haproxy_health_check_fall }} rise {{ haproxy_health_check_rise }} on-marked-down shutdown-sessions slowstart {{ haproxy_graceful_drain_seconds }}s
{% set stream_hosts = (groups['stream_server_instances'] | default(haproxy_stream_server_fallback)) %}
{% for host in stream_hosts %}
server {{ host }} {{ host }}.lxd:{{ haproxy_stream_server_port }}
{% endfor %}
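    # Illustrative rendering only — lab group names and port 8082 are assumed:
    #   server stream-server-1 stream-server-1.lxd:8082
    #   server stream-server-2 stream-server-2.lxd:8082
    # "balance uri whole" hashes the full request path, so every segment of
    # the same track keeps landing on the same node while it is up.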
{% endif %}
{% if haproxy_letsencrypt | default(false) %}
# -----------------------------------------------------------------------
# letsencrypt_backend — proxies HTTP-01 challenges to the
# http-letsencrypt.service sidecar (python -m http.server on
# 127.0.0.1:8888 serving /var/www/letsencrypt/). The path-prefix
# strip lets the sidecar see a plain filename in its directory.
# -----------------------------------------------------------------------
backend letsencrypt_backend
http-request set-path %[path,regsub(/.well-known/acme-challenge/,/)]
server letsencrypt 127.0.0.1:8888
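    # Illustrative: GET /.well-known/acme-challenge/<token> is rewritten to
    # GET /<token> before being proxied, so the sidecar resolves the token
    # directly inside its serving directory.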
{% endif %}