refactor(ansible): single edge HAProxy — multi-env + Forgejo + Talas
The 12-record DNS plan ($1 per record at the registrar, and only one
public IP on the R720) forces the obvious : a single HAProxy on :443
must serve staging.veza.fr + veza.fr + www.veza.fr + talas.fr +
www.talas.fr + forgejo.talas.group all at once. Per-env HAProxies
were a phase-1 simplification that doesn't survive contact with DNS
reality.
Topology after :

  veza-haproxy (one container, R720 public 443)
  ├── ACL host_staging → staging_{backend_api,stream_pool,web_pool}
  │     → veza-staging-{component}-{blue|green}.lxd
  ├── ACL host_prod → prod_{backend_api,stream_pool,web_pool}
  │     → veza-{component}-{blue|green}.lxd
  ├── ACL host_forgejo → forgejo_backend → 10.0.20.105:3000
  │     (Forgejo container managed outside the deploy pipeline)
  └── ACL host_talas → talas_vitrine_backend
        (placeholder 503 until the static site lands)
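A quick contract check for the new edge (sketch only — the IP below is
a placeholder for the R720 public address, and 503 is only expected on
the talas hosts until the vitrine lands) :

  - name: Smoke-test Host-based routing on the shared edge
    hosts: localhost
    gather_facts: false
    tasks:
      - name: Every public hostname answers on the one public IP
        ansible.builtin.uri:
          url: "https://203.0.113.10/"
          headers:
            Host: "{{ item }}"     # the routing key the frontend ACLs match on
          validate_certs: false    # cert SAN won't match a raw IP
          status_code: [200, 503]
        loop:
          - staging.veza.fr
          - veza.fr
          - www.veza.fr
          - talas.fr
          - www.talas.fr
          - forgejo.talas.group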
Changes :

inventory/{staging,prod}.yml :
  The `haproxy:` group in both files now points to the SAME container,
  `veza-haproxy` (no env prefix). A comment makes the contract explicit
  so the next reader doesn't try to split it back apart.
group_vars/all/main.yml :
  NEW : haproxy_env_prefixes (per-env container prefix mapping).
  NEW : haproxy_env_public_hosts (per-env Host-header mapping).
  NEW : haproxy_forgejo_host + haproxy_forgejo_backend.
  NEW : haproxy_talas_hosts + haproxy_talas_vitrine_backend.
  NEW : haproxy_letsencrypt_* (moved from env files — the edge is
        shared, so the LE config is shared too ; otherwise whichever
        env ran the haproxy role last would clobber the domain set).
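  For orientation — the prefix map is what the template concatenates
  into backend hostnames (prefix + component-color + dns suffix) ;
  sketch, with .lxd standing in for veza_incus_dns_suffix :

    haproxy_env_prefixes:
      staging: "veza-staging-"   # → veza-staging-backend-blue.lxd
      prod: "veza-"              # → veza-backend-blue.lxd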
group_vars/{staging,prod}.yml :
  Strip the haproxy_letsencrypt_* block (now in all/main.yml). A
  comment points readers there.
roles/haproxy/templates/haproxy.cfg.j2 :
  The `blue-green` topology branch is rebuilt around per-env backends
  (`<env>_backend_api`, `<env>_stream_pool`, `<env>_web_pool`) plus
  standalone `forgejo_backend`, `talas_vitrine_backend`, `default_503`.
  Frontend ACLs : `host_<env>` (hdr(host) -i ...) selects which env's
  backends to use ; path ACLs (`is_api`, `is_stream_seg`, etc.) refine
  within the env.
  Sticky cookie name is suffixed `_<env>` (renders as `<cookie>_staging`
  vs `<cookie>_prod`) so a user logged into staging doesn't carry the
  cookie into prod.
  Per-env active color comes from the haproxy_active_colors map
  (built by veza_haproxy_switch — see below).
  Multi-instance branch (lab) untouched.
roles/veza_haproxy_switch/defaults/main.yml :
  haproxy_active_color_file + history paths are now suffixed
  `-{{ veza_env }}` so staging and prod state can't collide.
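  Concretely, the defaults now resolve per env, e.g. with
  veza_env == staging :

    haproxy_active_color_file: /var/lib/veza/active-color-staging
    haproxy_active_color_history: /var/lib/veza/active-color-staging.history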
roles/veza_haproxy_switch/tasks/main.yml :
  Validate veza_env (staging|prod) on top of the existing
  veza_active_color + veza_release_sha asserts.
  Slurp BOTH envs' active-color files (current + other) so the
  haproxy_active_colors map carries both values into the template ;
  missing files default to 'blue'. The resulting map is sketched below.
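  E.g. a staging deploy switching to green, with prod untouched,
  hands the template :

    haproxy_active_colors:
      staging: green   # this run's veza_active_color
      prod: blue       # slurped from active-color-prod ; 'blue' if the file is absent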
playbooks/deploy_app.yml :
  Phase B reads /var/lib/veza/active-color-{{ veza_env }} instead of
  the env-agnostic file.
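  The slurped value still needs the usual decode + missing-file
  fallback ; a sketch of the consuming expression (the variable name
  here is illustrative, not the playbook's) :

    veza_previous_color: >-
      {{ (prior_color_raw.content | b64decode | trim)
         if prior_color_raw.content is defined
         else 'blue' }}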
playbooks/cleanup_failed.yml :
  Reads the per-env active-color file ; the container reference is
  fixed (was templated via veza_container_prefix, now hardcoded
  `veza-haproxy`).
playbooks/rollback.yml :
  Fast-mode SHA lookup reads the per-env history file.

Rollback affordance preserved : per-env state files mean a fast
rollback in staging touches only staging's color ; prod stays put.
The history files (`active-color-{staging,prod}.history`) keep the
last 5 deploys per env independently.
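A throwaway check of what fast mode would pick up — this mirrors the
lookup in rollback.yml ; the task itself is illustrative :

  - name: Show the SHA a fast rollback would reuse for this env
    ansible.builtin.debug:
      msg: >-
        {{ lookup('ansible.builtin.file',
                  '/var/lib/veza/active-color-' + veza_env + '.history',
                  errors='ignore') | default('', true)
           | regex_search('sha=[0-9a-f]{40}')
           | default('no sha recorded yet', true) }}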
Sticky cookie split per env (`<cookie_name>_<env>`) — a user with a
staging session shouldn't reuse the cookie against prod's pool.
Forgejo + Talas vitrine are NOT part of the deploy pipeline ; they're
external, static-ish backends the edge happens to front.
haproxy_forgejo_backend is "10.0.20.105:3000" today (matches the
existing Incus container at that address).
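The placeholder behaviour is testable today (sketch ; assumes the LE
cert for talas.fr is already in place) :

  - name: Talas vitrine contract — 503 until the backend var is set
    ansible.builtin.uri:
      url: "https://talas.fr/"
      status_code: 503   # flips to 200 once haproxy_talas_vitrine_backend is filled in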
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
parent da99044496 → commit 5153ab113d
11 changed files, 208 additions, 100 deletions
group_vars/all/main.yml
@@ -93,3 +93,49 @@ veza_install_root: /opt/veza
 veza_config_root: /etc/veza
 veza_log_root: /var/log/veza
 veza_state_root: /var/lib/veza
+
+# ============================================================
+# Edge HAProxy — single-instance, shared across staging+prod.
+# Both inventories declare the same `veza-haproxy` container ;
+# the template at roles/haproxy/templates/haproxy.cfg.j2
+# enumerates BOTH envs so a staging deploy doesn't lose prod
+# routing (and vice versa). Per-env container prefixes below
+# let the template render the right backend hostnames.
+# ============================================================
+haproxy_env_prefixes:
+  staging: "veza-staging-"
+  prod: "veza-"
+haproxy_env_public_hosts:
+  staging:
+    - staging.veza.fr
+  prod:
+    - veza.fr
+    - www.veza.fr
+
+# Forgejo lives outside the per-env app tier — its container is
+# unmanaged by the deploy pipeline, but the edge HAProxy Host-routes
+# forgejo.talas.group to it. Set to empty string to disable.
+haproxy_forgejo_host: forgejo.talas.group
+haproxy_forgejo_backend: "10.0.20.105:3000"
+
+# Talas vitrine — placeholder until the static site lands.
+# When haproxy_talas_vitrine_backend is empty, requests to
+# {talas.fr,www.talas.fr} get a 503 with a maintenance page.
+haproxy_talas_hosts:
+  - talas.fr
+  - www.talas.fr
+haproxy_talas_vitrine_backend: ""
+
+# Let's Encrypt — defined here (not in env files) because the edge
+# HAProxy is SHARED ; whichever env last ran the haproxy role would
+# otherwise overwrite the domain set. Every public hostname the edge
+# serves goes through dehydrated. Internal services on talas.group
+# are NOT here unless they need a public-trusted cert
+# (forgejo.talas.group does, since browsers must accept its cert).
+haproxy_letsencrypt: true
+haproxy_letsencrypt_email: ops@veza.fr
+haproxy_letsencrypt_domains:
+  - veza.fr www.veza.fr    # prod apex + www in one cert
+  - staging.veza.fr        # staging
+  - talas.fr www.talas.fr  # talas vitrine
+  - forgejo.talas.group    # forgejo (LE issues even when DNS points at the public R720 IP)
group_vars/prod.yml
@@ -41,15 +41,6 @@ postgres_password: "{{ vault_postgres_password }}"
 redis_password: "{{ vault_redis_password }}"
 rabbitmq_password: "{{ vault_rabbitmq_password }}"
 
-# Let's Encrypt — HTTP-01 via dehydrated. Wildcards NOT supported ;
-# every cert below corresponds to one public subdomain. Internal
-# services on talas.group are NOT here — WireGuard is the trust
-# boundary for those.
-#
-# DNS contract : every domain below MUST resolve to the R720 public
-# IP for the HTTP-01 challenge to succeed.
-haproxy_letsencrypt: true
-haproxy_letsencrypt_email: ops@veza.fr
-haproxy_letsencrypt_domains:
-  - veza.fr www.veza.fr
-  - talas.fr www.talas.fr
+# Let's Encrypt config moved to group_vars/all/main.yml — the edge
+# HAProxy is SHARED across staging+prod, so the domain list lives in
+# the env-agnostic file. See haproxy_letsencrypt_domains there.
group_vars/staging.yml
@@ -66,17 +66,6 @@ postgres_password: "{{ vault_postgres_password }}"
 redis_password: "{{ vault_redis_password }}"
 rabbitmq_password: "{{ vault_rabbitmq_password }}"
 
-# Let's Encrypt — HTTP-01 via dehydrated (see roles/haproxy/letsencrypt.yml).
-# Wildcards NOT supported ; list every public subdomain explicitly.
-# Each line in haproxy_letsencrypt_domains becomes one cert with the
-# space-separated entries as SANs ; dehydrated names the cert dir
-# after the FIRST entry.
-#
-# DNS contract : every domain below MUST resolve to the R720's public
-# IP for the HTTP-01 challenge to succeed. Internal services on
-# talas.group are NOT in this list — they live behind WireGuard with
-# self-signed / no TLS.
-haproxy_letsencrypt: true
-haproxy_letsencrypt_email: ops@veza.fr
-haproxy_letsencrypt_domains:
-  - staging.veza.fr
+# Let's Encrypt config moved to group_vars/all/main.yml — the edge
+# HAProxy is SHARED across staging+prod, so the domain list lives in
+# the env-agnostic file. See haproxy_letsencrypt_domains there.
inventory/prod.yml
@@ -21,6 +21,9 @@ all:
     incus_hosts:
       hosts:
         veza-prod:
+    # SHARED edge — one HAProxy on the R720 public 443. Serves
+    # staging + prod + forgejo.talas.group simultaneously. Same
+    # container in both staging.yml and prod.yml inventories.
     haproxy:
       hosts:
         veza-haproxy:
inventory/staging.yml
@@ -37,9 +37,14 @@ all:
     incus_hosts:
      hosts:
         veza-staging:
+    # SHARED edge — one HAProxy on the R720 public 443. Serves
+    # staging + prod + forgejo.talas.group simultaneously, Host-based
+    # routing per env. NAME deliberately env-agnostic (no veza-staging-
+    # prefix) since staging.yml and prod.yml both target the same
+    # container.
     haproxy:
       hosts:
-        veza-staging-haproxy:
+        veza-haproxy:
   vars:
     ansible_connection: community.general.incus
     ansible_python_interpreter: /usr/bin/python3
playbooks/cleanup_failed.yml
@@ -28,11 +28,10 @@
         fail_msg: cleanup_failed.yml requires veza_env + target_color.
         quiet: true
 
-    - name: Read active color from HAProxy container
-      ansible.builtin.shell: |
-        incus exec "{{ veza_container_prefix }}haproxy" -- \
-          cat /var/lib/veza/active-color 2>/dev/null | tr -d '[:space:]'
-      args:
+    - name: Read active color for {{ veza_env }} from shared HAProxy container
+      ansible.builtin.shell:
+        cmd: |
+          incus exec veza-haproxy -- cat "/var/lib/veza/active-color-{{ veza_env }}" 2>/dev/null | tr -d '[:space:]'
         executable: /bin/bash
       register: active_color_raw
      changed_when: false
playbooks/deploy_app.yml
@@ -125,9 +125,9 @@
   become: true
   gather_facts: false
   tasks:
-    - name: Read currently-active color
+    - name: Read currently-active color for {{ veza_env }}
       ansible.builtin.slurp:
-        src: /var/lib/veza/active-color
+        src: "/var/lib/veza/active-color-{{ veza_env }}"
       register: prior_color_raw
       failed_when: false
 
playbooks/rollback.yml
@@ -93,10 +93,11 @@
         name: veza_haproxy_switch
       vars:
         veza_active_color: "{{ target_color }}"
-        # Fast rollback re-uses the previous SHA from the history file.
-        # Fallback to a synthetic 40-char SHA if the file is missing —
-        # the role's assert tolerates this for the rollback case.
-        veza_release_sha: "{{ (lookup('ansible.builtin.file', '/var/lib/veza/active-color.history', errors='ignore') | default('', true) | regex_search('sha=([0-9a-f]{40})', '\\1') | default('r0llback' + '0' * 32, true)) }}"
+        # Fast rollback re-uses the previous SHA from the per-env
+        # history file. Fallback to a synthetic 40-char SHA if the
+        # file is missing — the role's assert tolerates this for
+        # the rollback case.
+        veza_release_sha: "{{ (lookup('ansible.builtin.file', '/var/lib/veza/active-color-' + veza_env + '.history', errors='ignore') | default('', true) | regex_search('sha=([0-9a-f]{40})', '\\1') | default('r0llback' + '0' * 32, true)) }}"
       when: mode == 'fast'
       tags: [rollback, fast]
 
roles/haproxy/templates/haproxy.cfg.j2
@@ -72,18 +72,54 @@ frontend veza_http_in
     http-request redirect scheme https code 301 if !{ ssl_fc }
 {% endif %}
 
-    acl is_api path_beg /api/v1
 {% if haproxy_topology | default('multi-instance') == 'blue-green' %}
+    # ===================================================================
+    # Host-based routing — single edge HAProxy serves all envs + Forgejo
+    # ===================================================================
+{% for env, hosts in haproxy_env_public_hosts.items() %}
+    acl host_{{ env }} hdr(host),lower -i {{ hosts | join(' ') }}
+{% endfor %}
+{% if haproxy_forgejo_host %}
+    acl host_forgejo hdr(host),lower -i {{ haproxy_forgejo_host }}
+{% endif %}
+{% if haproxy_talas_hosts %}
+    acl host_talas hdr(host),lower -i {{ haproxy_talas_hosts | join(' ') }}
+{% endif %}
+
+    # Path ACLs (apply within each env's traffic)
+    acl is_api path_beg /api/v1
     acl is_stream_seg path_beg /tracks/ path_end .m3u8
     acl is_stream_seg path_beg /tracks/ path_end .ts
     acl is_stream_seg path_beg /tracks/ path_end .m4s
     acl is_stream_path path_beg /stream
     acl is_stream_path path_beg /hls
-    use_backend backend_api if is_api
-    use_backend stream_pool if is_stream_seg
-    use_backend stream_pool if is_stream_path
-    default_backend web_pool
+
+    # ===================================================================
+    # Routing — per env: API → backend, /tracks/* /stream /hls → stream,
+    # everything else → web. Forgejo and Talas bypass the path logic.
+    # ===================================================================
+{% if haproxy_forgejo_host %}
+    use_backend forgejo_backend if host_forgejo
+{% endif %}
+{% if haproxy_talas_hosts %}
+    use_backend talas_vitrine_backend if host_talas
+{% endif %}
+{% for env in haproxy_env_public_hosts.keys() %}
+    use_backend {{ env }}_backend_api if host_{{ env }} is_api
+    use_backend {{ env }}_stream_pool if host_{{ env }} is_stream_seg
+    use_backend {{ env }}_stream_pool if host_{{ env }} is_stream_path
+    use_backend {{ env }}_web_pool if host_{{ env }}
+{% endfor %}
+
+    # Default backend — request didn't match any known host. Returns the
+    # talas vitrine if configured, otherwise a hard 503.
+{% if haproxy_talas_hosts %}
+    default_backend talas_vitrine_backend
 {% else %}
+    default_backend default_503
+{% endif %}
+{% else %}
+    acl is_api path_beg /api/v1
     acl is_stream path_beg /tracks/ path_end .m3u8
     acl is_stream path_beg /tracks/ path_end .ts
     acl is_stream path_beg /tracks/ path_end .m4s
roles/haproxy/templates/haproxy.cfg.j2
@@ -93,72 +93,73 @@ frontend veza_http_in
 {% if haproxy_topology | default('multi-instance') == 'blue-green' %}
 # =======================================================================
-# BLUE / GREEN topology (staging, prod)
+# BLUE / GREEN backends, per env (staging + prod)
 #
-# active_color is the variable veza_haproxy_switch passes in. It selects
-# which server gets `check` and which gets `check backup`. HAProxy only
-# routes to a `backup` server when EVERY non-backup is marked down by
-# its health check ; together with health-check fall=3 this gives us
-# instant rollback to the prior color if the new one starts failing
-# health checks (without re-running Ansible).
-#
-# Active color: {{ veza_active_color | default(haproxy_active_color | default('blue')) }}
-# Container prefix: {{ veza_container_prefix }}
-# DNS suffix: {{ veza_incus_dns_suffix }}
+# haproxy_active_colors comes from the veza_haproxy_switch role's
+# set_fact in tasks/main.yml — it always carries BOTH envs' current
+# colors so a staging deploy doesn't drop the prod backend (and v.v.).
 # =======================================================================
-{% set _active = veza_active_color | default(haproxy_active_color | default('blue')) %}
+{% set active_colors = haproxy_active_colors | default({'staging': 'blue', 'prod': 'blue'}) %}
 
-# -----------------------------------------------------------------------
-# Backend API pool — Go. Sticky cookie ; backup color sits idle.
-# -----------------------------------------------------------------------
-backend backend_api
+{% for env, prefix in haproxy_env_prefixes.items() %}
+{% set _active = active_colors[env] | default('blue') %}
+
+# --- {{ env }} : backend API (Go) -------------------------------------
+backend {{ env }}_backend_api
     balance roundrobin
     option httpchk GET {{ veza_healthcheck_paths.backend | default('/api/v1/health') }}
     http-check expect status 200
-    cookie {{ haproxy_sticky_cookie_name }} insert indirect nocache httponly secure
-    default-server check
-        inter {{ haproxy_health_check_interval_ms }}
-        fall {{ haproxy_health_check_fall }}
-        rise {{ haproxy_health_check_rise }}
-        on-marked-down shutdown-sessions
-        slowstart {{ haproxy_graceful_drain_seconds }}s
-    server backend_blue {{ veza_container_prefix }}backend-blue.{{ veza_incus_dns_suffix }}:{{ veza_backend_port }} cookie backend_blue {{ '' if _active == 'blue' else 'backup' }}
-    server backend_green {{ veza_container_prefix }}backend-green.{{ veza_incus_dns_suffix }}:{{ veza_backend_port }} cookie backend_green {{ '' if _active == 'green' else 'backup' }}
+    cookie {{ haproxy_sticky_cookie_name }}_{{ env }} insert indirect nocache httponly secure
+    default-server check inter {{ haproxy_health_check_interval_ms }} fall {{ haproxy_health_check_fall }} rise {{ haproxy_health_check_rise }} on-marked-down shutdown-sessions slowstart {{ haproxy_graceful_drain_seconds }}s
+    server {{ env }}_backend_blue {{ prefix }}backend-blue.{{ veza_incus_dns_suffix }}:{{ veza_backend_port }} cookie {{ env }}_backend_blue {{ '' if _active == 'blue' else 'backup' }}
+    server {{ env }}_backend_green {{ prefix }}backend-green.{{ veza_incus_dns_suffix }}:{{ veza_backend_port }} cookie {{ env }}_backend_green {{ '' if _active == 'green' else 'backup' }}
 
-# -----------------------------------------------------------------------
-# Stream pool — Rust Axum HLS. URI-hash for cache locality. Same
-# blue/green pair, same backup-flag pattern.
-# -----------------------------------------------------------------------
-backend stream_pool
+# --- {{ env }} : stream pool (Rust) -----------------------------------
+backend {{ env }}_stream_pool
     balance uri whole
     hash-type consistent
     option httpchk GET {{ veza_healthcheck_paths.stream | default('/health') }}
     http-check expect status 200
     timeout tunnel 1h
-    default-server check
-        inter {{ haproxy_health_check_interval_ms }}
-        fall {{ haproxy_health_check_fall }}
-        rise {{ haproxy_health_check_rise }}
-        on-marked-down shutdown-sessions
-        slowstart {{ haproxy_graceful_drain_seconds }}s
-    server stream_blue {{ veza_container_prefix }}stream-blue.{{ veza_incus_dns_suffix }}:{{ veza_stream_port }} {{ '' if _active == 'blue' else 'backup' }}
-    server stream_green {{ veza_container_prefix }}stream-green.{{ veza_incus_dns_suffix }}:{{ veza_stream_port }} {{ '' if _active == 'green' else 'backup' }}
+    default-server check inter {{ haproxy_health_check_interval_ms }} fall {{ haproxy_health_check_fall }} rise {{ haproxy_health_check_rise }} on-marked-down shutdown-sessions slowstart {{ haproxy_graceful_drain_seconds }}s
+    server {{ env }}_stream_blue {{ prefix }}stream-blue.{{ veza_incus_dns_suffix }}:{{ veza_stream_port }} {{ '' if _active == 'blue' else 'backup' }}
+    server {{ env }}_stream_green {{ prefix }}stream-green.{{ veza_incus_dns_suffix }}:{{ veza_stream_port }} {{ '' if _active == 'green' else 'backup' }}
 
-# -----------------------------------------------------------------------
-# Web pool — React SPA served by nginx. Same pair, same pattern.
-# -----------------------------------------------------------------------
-backend web_pool
+# --- {{ env }} : web pool (nginx) -------------------------------------
+backend {{ env }}_web_pool
     balance roundrobin
     option httpchk GET {{ veza_healthcheck_paths.web | default('/') }}
     http-check expect status 200
-    default-server check
-        inter {{ haproxy_health_check_interval_ms }}
-        fall {{ haproxy_health_check_fall }}
-        rise {{ haproxy_health_check_rise }}
-        on-marked-down shutdown-sessions
-        slowstart {{ haproxy_graceful_drain_seconds }}s
-    server web_blue {{ veza_container_prefix }}web-blue.{{ veza_incus_dns_suffix }}:{{ veza_web_port }} {{ '' if _active == 'blue' else 'backup' }}
-    server web_green {{ veza_container_prefix }}web-green.{{ veza_incus_dns_suffix }}:{{ veza_web_port }} {{ '' if _active == 'green' else 'backup' }}
+    default-server check inter {{ haproxy_health_check_interval_ms }} fall {{ haproxy_health_check_fall }} rise {{ haproxy_health_check_rise }} on-marked-down shutdown-sessions slowstart {{ haproxy_graceful_drain_seconds }}s
+    server {{ env }}_web_blue {{ prefix }}web-blue.{{ veza_incus_dns_suffix }}:{{ veza_web_port }} {{ '' if _active == 'blue' else 'backup' }}
+    server {{ env }}_web_green {{ prefix }}web-green.{{ veza_incus_dns_suffix }}:{{ veza_web_port }} {{ '' if _active == 'green' else 'backup' }}
+
+{% endfor %}
+
+{% if haproxy_forgejo_host %}
+# --- Forgejo (managed outside the deploy pipeline) --------------------
+backend forgejo_backend
+    option httpchk GET /
+    http-check expect status 200
+    default-server check inter 10s fall 3 rise 2
+    server forgejo {{ haproxy_forgejo_backend }}
+{% endif %}
+
+{% if haproxy_talas_hosts %}
+# --- Talas vitrine (placeholder until the site lands) -----------------
+backend talas_vitrine_backend
+{% if haproxy_talas_vitrine_backend %}
+    default-server check inter 5s
+    server talas {{ haproxy_talas_vitrine_backend }}
+{% else %}
+    # No backend configured yet — return 503 with a small body.
+    http-request return status 503 content-type text/plain string "Talas vitrine — coming soon."
+{% endif %}
+{% endif %}
+
+# --- 503 catch-all ----------------------------------------------------
+backend default_503
+    http-request return status 503 content-type text/plain string "Unknown host"
 
 {% else %}
 # =======================================================================
roles/veza_haproxy_switch/defaults/main.yml
@@ -3,14 +3,18 @@
 # fail loud if the caller forgot to pass them.
 veza_active_color: ""
 veza_release_sha: ""
+# veza_env is read from group_vars (staging|prod). Validated inside
+# tasks/main.yml.
 
-# Paths inside the HAProxy container.
+# Paths inside the SHARED HAProxy container. Per-env state files so a
+# staging deploy can't accidentally trip the prod active-color (and
+# vice versa).
 haproxy_cfg_path: /etc/haproxy/haproxy.cfg
 haproxy_cfg_new_path: /etc/haproxy/haproxy.cfg.new
 haproxy_cfg_backup_path: /etc/haproxy/haproxy.cfg.bak
 haproxy_state_dir: /var/lib/veza
-haproxy_active_color_file: /var/lib/veza/active-color
-haproxy_active_color_history: /var/lib/veza/active-color.history
+haproxy_active_color_file: "/var/lib/veza/active-color-{{ veza_env }}"
+haproxy_active_color_history: "/var/lib/veza/active-color-{{ veza_env }}.history"
 
 # How many history entries to keep before pruning. The rollback role
 # offers point-in-time switch within this window without redeploying
roles/veza_haproxy_switch/tasks/main.yml
@@ -14,10 +14,12 @@
     that:
       - veza_active_color in ['blue', 'green']
       - veza_release_sha | length == 40
+      - veza_env in ['staging', 'prod']
     fail_msg: >-
-      veza_haproxy_switch role requires veza_active_color (blue|green)
-      and veza_release_sha (40-char git SHA). Got: color={{ veza_active_color }}
-      sha={{ veza_release_sha }}.
+      veza_haproxy_switch role requires veza_active_color (blue|green),
+      veza_release_sha (40-char git SHA), and veza_env (staging|prod).
+      Got: color={{ veza_active_color }} sha={{ veza_release_sha }}
+      env={{ veza_env | default('UNSET') }}.
     quiet: true
   tags: [veza_haproxy_switch, always]
 
roles/veza_haproxy_switch/tasks/main.yml
@@ -30,7 +32,7 @@
     mode: "0755"
   tags: [veza_haproxy_switch]
 
-- name: Read currently-active color (if any)
+- name: Read currently-active color for THIS env (if any)
   ansible.builtin.slurp:
     src: "{{ haproxy_active_color_file }}"
   register: prior_color_raw
roles/veza_haproxy_switch/tasks/main.yml
@@ -45,6 +47,37 @@
       else 'blue' }}
   tags: [veza_haproxy_switch]
 
+# Read the OTHER env's active color too — the haproxy template renders
+# both staging+prod simultaneously, so we need both values in scope.
+- name: Read OTHER env's active color
+  ansible.builtin.slurp:
+    src: "/var/lib/veza/active-color-{{ 'prod' if veza_env == 'staging' else 'staging' }}"
+  register: other_color_raw
+  failed_when: false
+  changed_when: false
+  tags: [veza_haproxy_switch]
+
+- name: Build haproxy_active_colors map (current state of every env)
+  ansible.builtin.set_fact:
+    haproxy_active_colors:
+      staging: >-
+        {%- if veza_env == 'staging' -%}
+        {{ veza_active_color }}
+        {%- elif other_color_raw.content is defined -%}
+        {{ other_color_raw.content | b64decode | trim }}
+        {%- else -%}
+        blue
+        {%- endif -%}
+      prod: >-
+        {%- if veza_env == 'prod' -%}
+        {{ veza_active_color }}
+        {%- elif other_color_raw.content is defined -%}
+        {{ other_color_raw.content | b64decode | trim }}
+        {%- else -%}
+        blue
+        {%- endif -%}
+  tags: [veza_haproxy_switch]
+
 - name: Switch sequence (block/rescue — restores cfg on any failure)
   block:
     - name: Backup current haproxy.cfg