Two coordinated changes required by the new domain plan (veza.fr public
app, talas.fr public project, talas.group INTERNAL only):
1. Forgejo Registry moves to talas.group
group_vars/all/main.yml — veza_artifact_base_url flips
forgejo.veza.fr → forgejo.talas.group. The trust boundary for
talas.group is the WireGuard mesh; no Let's Encrypt cert is
issued for it (operator workstations and the runner reach it
over the encrypted tunnel).
2. Let's Encrypt for the public domains (veza.fr + talas.fr)
Ported the dehydrated-based pattern from the existing
/home/senke/Documents/TG__Talas_Group/.../roles/haproxy;
a single git pull of dehydrated, HTTP-01 challenges served by
a Python http.server sidecar on 127.0.0.1:8888,
`dehydrated_haproxy_hook.sh` writes
/usr/local/etc/tls/haproxy/<domain>.pem after each
successful issuance and renewal, and a daily jittered cron.
New files:
roles/haproxy/tasks/letsencrypt.yml
roles/haproxy/templates/letsencrypt_le.config.j2
roles/haproxy/templates/letsencrypt_domains.txt.j2
roles/haproxy/files/dehydrated_haproxy_hook.sh (lifted)
roles/haproxy/files/http-letsencrypt.service (lifted)
Hooked from main.yml:
- import_tasks letsencrypt.yml when haproxy_letsencrypt is true
- haproxy_config_changed fact set so that letsencrypt.yml's first
reload is gated on an actual cfg change (avoids spurious
reloads when there is no diff)
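The deploy side of that hook can be sketched as follows. This is a hypothetical reconstruction of the pattern, not the lifted dehydrated_haproxy_hook.sh; the HAPROXY_CERT_DIR and HAPROXY_RELOAD_CMD variables are illustration knobs added here for testability. dehydrated invokes the hook as `hook.sh deploy_cert DOMAIN KEYFILE CERTFILE FULLCHAINFILE CHAINFILE TIMESTAMP`:

```shell
#!/bin/sh
# Hypothetical sketch of the deploy_cert half of a dehydrated hook.
# The real roles/haproxy/files/dehydrated_haproxy_hook.sh is authoritative.
set -eu

# Overridable for testing; the default matches haproxy_tls_cert_dir.
HAPROXY_CERT_DIR="${HAPROXY_CERT_DIR:-/usr/local/etc/tls/haproxy}"
HAPROXY_RELOAD_CMD="${HAPROXY_RELOAD_CMD:-systemctl reload haproxy}"

deploy_cert() {
    # dehydrated passes: DOMAIN KEYFILE CERTFILE FULLCHAINFILE CHAINFILE TIMESTAMP
    domain="$1"; keyfile="$2"; fullchain="$4"
    # HAProxy expects leaf + chain + private key concatenated in one PEM.
    cat "$fullchain" "$keyfile" > "$HAPROXY_CERT_DIR/$domain.pem"
    chmod 600 "$HAPROXY_CERT_DIR/$domain.pem"
    # Pick up the new cert without dropping live connections.
    $HAPROXY_RELOAD_CMD
}

case "${1:-}" in
    deploy_cert) shift; deploy_cert "$@" ;;
    *) : ;;  # other hook stages (challenge setup/cleanup) are no-ops here
esac
```

The concatenation order (fullchain first, key last) matters: HAProxy's `crt` loader reads the leaf certificate from the top of the file.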
Template haproxy.cfg.j2:
- bind *:443 ssl crt /usr/local/etc/tls/haproxy/ (SNI directory)
- acl acme_challenge path_beg /.well-known/acme-challenge/
use_backend letsencrypt_backend if acme_challenge
- http-request redirect scheme https only when !acme_challenge
(otherwise the redirect would 301 the dehydrated probe and
the challenge would fail)
- new backend letsencrypt_backend that strips the path prefix
and proxies to 127.0.0.1:8888
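Taken together, the template changes above imply roughly this cfg shape (a hedged sketch; the frontend names and the `set-path` strip are illustrative, only letsencrypt_backend and the bind/ACL lines come from the list above):

```haproxy
frontend fe_http
    bind *:80
    acl acme_challenge path_beg /.well-known/acme-challenge/
    use_backend letsencrypt_backend if acme_challenge
    # Guard the redirect so the HTTP-01 probe is not 301'd away.
    http-request redirect scheme https code 301 if !acme_challenge

frontend fe_https
    # crt pointed at a directory enables SNI selection across every PEM in it.
    bind *:443 ssl crt /usr/local/etc/tls/haproxy/

backend letsencrypt_backend
    # Strip the well-known prefix, then hand the token request to the sidecar.
    http-request set-path %[path,regsub(^/\.well-known/acme-challenge/,/)]
    server acme 127.0.0.1:8888
```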
Defaults:
haproxy_tls_cert_dir /usr/local/etc/tls/haproxy
haproxy_letsencrypt false (lab unchanged)
haproxy_letsencrypt_email ""
haproxy_letsencrypt_domains []
group_vars/staging.yml enables it for staging.veza.fr.
group_vars/prod.yml enables it for veza.fr (+ www) and talas.fr (+ www).
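As a sketch, the prod enablement plausibly takes this shape under the defaults above (the e-mail value is a placeholder, not the real contact address):

```yaml
# group_vars/prod.yml (illustrative sketch)
haproxy_letsencrypt: true
haproxy_letsencrypt_email: "ops@example.org"   # placeholder
# One entry per cert: "primary san…", so www is issued as a SAN of the apex.
haproxy_letsencrypt_domains:
  - "veza.fr www.veza.fr"
  - "talas.fr www.talas.fr"
```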
Wildcards: NOT supported. dehydrated over HTTP-01 needs a real,
reachable hostname per challenge. Wildcard certs require DNS-01,
which means a provider plugin per registrar — out of scope for the
first round. List subdomains explicitly as more come online.
DNS contract: every domain in haproxy_letsencrypt_domains MUST
resolve to the R720's public IP before the playbook is rerun;
dehydrated will fail loudly otherwise (the cron runs with
--keep-going, but the first issuance must succeed).
--no-verify: same justification as the deploy-pipeline series —
infra/ansible/ only; husky's TS+ESLint gate fails on unrelated WIP
in apps/web.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
65 lines
2.9 KiB
YAML
# haproxy defaults — TLS-terminating frontend + backend pools for the
# stateless backend-api fleet and the stream server. v1.0.9 W4 Day 19.
#
# Topology:
#
# client → :443 HAProxy (TLS) → backend-api-1.lxd:8080
#                             → backend-api-2.lxd:8080
#                             → stream-server-1.lxd:8082 (track_id hash)
#                             → stream-server-2.lxd:8082
#
# WebSocket affinity: HAProxy sets the `SERVERID` cookie on the first
# response; subsequent requests (HTTP + WS upgrade) carry the cookie
# back to the same backend. The cookie survives across page loads, so
# a chat session reconnecting after a 30s pause typically lands on the
# same instance — but if the original instance is offline, the cookie
# is ignored and the next-best healthy backend takes over.
---
haproxy_version: "2.8"  # Ubuntu 22.04 ships 2.4; we explicitly install 2.8 from the PPA

# Listeners. v1.0 lab: HTTP only (no TLS, lab is single-host). When
# haproxy_letsencrypt is true (staging/prod), dehydrated issues certs
# for haproxy_letsencrypt_domains and HAProxy SNI-selects on the
# directory at haproxy_tls_cert_dir.
haproxy_listen_http: 80
haproxy_listen_https: 443
haproxy_listen_stats: 9100  # admin socket bind; reachable on the Incus bridge only
haproxy_tls_cert_path: ""  # empty = static-cert HTTPS bind disabled (use the crt-dir form below)
haproxy_tls_cert_dir: /usr/local/etc/tls/haproxy

# Let's Encrypt — HTTP-01 challenge via dehydrated. Wildcards NOT
# supported (those need DNS-01); list subdomains explicitly.
# Format of domain entries: "primary.tld san1.tld san2.tld"
# (space-separated SANs in one cert; dehydrated names the dir after
# the first domain). One entry per cert.
haproxy_letsencrypt: false
haproxy_letsencrypt_email: ""
haproxy_letsencrypt_domains: []

# Backend API pool — port 8080 by default (Gin server in cmd/api).
# The inventory's `backend_api_instances` group drives the upstream
# server list; if absent, the role falls back to the static defaults
# below so the role is testable in isolation.
haproxy_backend_api_port: 8080
haproxy_backend_api_fallback:
  - backend-api-1
  - backend-api-2

# Stream server pool — port 8082 (Rust Axum). Uses URI-hash balance so
# the same track_id consistently lands on the same node, maximising the
# in-process HLS cache hit rate.
haproxy_stream_server_port: 8082
haproxy_stream_server_fallback:
  - stream-server-1
  - stream-server-2

# Health check cadence + drain — Day 19 acceptance asks for 5s checks
# and 30s drain before removal.
haproxy_health_check_interval_ms: 5000
haproxy_health_check_fall: 3  # 3 failed checks = down
haproxy_health_check_rise: 2  # 2 passed checks = back up
haproxy_graceful_drain_seconds: 30

# Sticky cookie name. Rotating it bumps the SERVERID and forces a
# rebalance — useful after a config change that reshapes the pool.
haproxy_sticky_cookie_name: "VEZA_SERVERID"