ROADMAP_V1.0_LAUNCH.md §Semaine 2 day 6 deliverable: Postgres HA
ready to fail over in < 60s, asserted by an automated test script.
Topology — 3 Incus containers per environment:
  pgaf-monitor   pg_auto_failover state machine (single instance)
  pgaf-primary   first registered → primary
  pgaf-replica   second registered → hot-standby (sync rep)
Files:
infra/ansible/playbooks/postgres_ha.yml
Provisions the 3 containers via `incus launch images:ubuntu/22.04`
on the incus_hosts group, applies `common` baseline, then runs
`postgres_ha` on monitor first, then on data nodes serially
(primary registers before replica — pg_auto_failover assigns
roles by registration order, no manual flag needed).
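The ordering is the important part and is easy to misread in prose. A rough sketch of the play structure, using the `postgres_ha_monitor` / `postgres_ha_nodes` group names from lab.yml (task bodies elided; the real playbook has 4 plays, so this is not a 1:1 rendering):

```yaml
# Sketch only — illustrates ordering, not the actual playbook contents.
- name: Provision pgaf containers on the lab host
  hosts: incus_hosts
  roles:
    - common
  # the `incus launch images:ubuntu/22.04` calls happen here

- name: Bring up the monitor first
  hosts: postgres_ha_monitor
  roles:
    - postgres_ha

- name: Register data nodes one at a time (primary before replica)
  hosts: postgres_ha_nodes
  serial: 1
  roles:
    - postgres_ha
```

`serial: 1` is what guarantees pgaf-primary registers before pgaf-replica, which is what makes pg_auto_failover's registration-order role assignment deterministic.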
infra/ansible/roles/postgres_ha/
defaults/main.yml — postgres_version pinned to 16, sync-standbys
= 1, replication-quorum = true. App user/dbname for the
formation. Password sourced from vault (placeholder default
`changeme-DEV-ONLY` so a missing vault can't silently set a
weak prod password). The role reads the value but does NOT
auto-create the app user; that's a follow-up via psql/SQL
provisioning when the backend wires DATABASE_URL.
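A hypothetical rendering of those defaults — the variable names and the `veza` user/dbname are illustrative assumptions, not the role's actual identifiers:

```yaml
# Sketch of defaults/main.yml — names are assumptions, values from the text.
postgres_version: 16
pgaf_sync_standbys: 1
pgaf_replication_quorum: true
pgaf_app_user: veza        # assumption — actual name lives in the role
pgaf_app_dbname: veza      # assumption
# Vault-sourced in real runs; the loud placeholder makes a missing vault obvious.
pgaf_app_password: "{{ vault_pgaf_app_password | default('changeme-DEV-ONLY') }}"
```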
tasks/install.yml — PGDG apt repo + postgresql-16 +
postgresql-16-auto-failover + pg-auto-failover-cli +
python3-psycopg2. Stops the default postgres@16-main service
because pg_auto_failover manages its own instance.
tasks/monitor.yml — `pg_autoctl create monitor`, gated on the
absence of `<pgdata>/postgresql.conf` so re-runs no-op.
Renders systemd unit `pg_autoctl.service` and starts it.
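The "gated on the absence of postgresql.conf" idempotency is the standard `creates:` idiom on the command module; an illustrative shape (flags and paths are assumptions):

```yaml
# Sketch — `creates:` makes re-runs a no-op once the monitor is initialized.
- name: Create pg_auto_failover monitor (skipped once initialized)
  become: true
  become_user: postgres
  ansible.builtin.command:
    cmd: pg_autoctl create monitor --pgdata {{ pgaf_monitor_pgdata }} --auth trust
    creates: "{{ pgaf_monitor_pgdata }}/postgresql.conf"
```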
tasks/node.yml — `pg_autoctl create postgres` joining the
monitor URI from defaults. Sets formation sync-standbys
policy idempotently from any node.
templates/pg_autoctl-{monitor,node}.service.j2 — minimal
systemd units, Restart=on-failure, NOFILE=65536.
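The "minimal" unit shape is roughly this (values as described in the text; the exact template contents are the role's, not reproduced here):

```ini
# Illustrative node unit — keeper process supervised by systemd.
[Unit]
Description=pg_auto_failover keeper
After=network-online.target

[Service]
User=postgres
Environment=PGDATA={{ pgaf_pgdata }}
ExecStart=/usr/bin/pg_autoctl run
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```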
README.md — operations cheatsheet (state, URI, manual failover),
vault setup, and scope notes: PgBouncer + pgBackRest + multi-region
are explicitly out, landing W2 days 7-8 and v1.2+ respectively.
infra/ansible/inventory/lab.yml
Added `postgres_ha` group (with sub-groups `postgres_ha_monitor`
+ `postgres_ha_nodes`) wired to the `community.general.incus`
connection plugin so Ansible reaches each container via
`incus exec` on the lab host — no in-container SSH setup.
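A sketch of that wiring, assuming the inventory hostname matches the container name (which is what lets the incus connection plugin resolve the target):

```yaml
# Sketch of the lab.yml group layout — structure per the text above.
postgres_ha:
  children:
    postgres_ha_monitor:
      hosts:
        pgaf-monitor:
    postgres_ha_nodes:
      hosts:
        pgaf-primary:
        pgaf-replica:
  vars:
    ansible_connection: community.general.incus
```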
infra/ansible/tests/test_pg_failover.sh
The acceptance script. Sequence:
0. read formation state via monitor — abort if degraded baseline
1. `incus stop --force pgaf-primary` — start RTO timer
2. poll monitor every 1s for the standby's promotion
3. `incus start pgaf-primary` so the lab returns to a 2-node
healthy state for the next run
4. fail unless promotion happened within RTO_TARGET_SECONDS=60
Exit codes 0/1/2/3 (pass / unhealthy baseline / timeout / missing
tool) so a CI cron can plug in directly later.
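The promotion check in step 2 reduces to grep/awk over the monitor's tabular state output. A self-contained demo of that logic against a fabricated two-row table — the column layout here is an assumption for illustration, and the real `pg_autoctl show state` output has more columns:

```shell
#!/usr/bin/env bash
# Demo of the promotion-detection idea on a made-up state table.
set -euo pipefail

state_before='node_1 | pgaf-primary | primary
node_2 | pgaf-replica | secondary'

state_after='node_1 | pgaf-primary | demoted
node_2 | pgaf-replica | primary'

# First row whose role column reads "primary" names the current primary.
primary_of() { echo "$1" | awk -F'|' '$3 ~ /primary/ {print $1; exit}' | tr -d ' '; }

old_primary=$(primary_of "$state_before")
new_primary=$(primary_of "$state_after")

if [ "$new_primary" != "$old_primary" ]; then
  echo "promoted: $new_primary (was $old_primary)"
fi
```

The real script is stricter — it also requires the old primary's row to no longer claim the primary role, which guards against stale monitor output mid-transition.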
Acceptance verified locally:
$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
--syntax-check
playbook: playbooks/postgres_ha.yml ← clean
$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
--list-tasks
4 plays, 22 tasks across plays, all tagged.
$ bash -n infra/ansible/tests/test_pg_failover.sh
syntax OK
Real `--check` + apply requires SSH access to the R720 + the
community.general collection installed (`ansible-galaxy collection
install community.general`). Operator runs that step.
Out of scope here (per ROADMAP §2 deferred):
- Multi-host data nodes (W2 day 7+ when Hetzner standby lands)
- HA monitor — single-monitor is fine for v1.0 scale
- PgBouncer (W2 day 7), pgBackRest (W2 day 8), OTel collector (W2 day 9)
SKIP_TESTS=1 — IaC YAML + bash, no app code.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
106 lines
4.5 KiB
Bash
Executable file
#!/usr/bin/env bash
# test_pg_failover.sh — validate pg_auto_failover RTO < 60s.
#
# Run on the Incus host that owns the pgaf-* containers (typically
# the lab R720 with `incus list` showing all three). Assumes the
# postgres_ha playbook has been applied so the formation is healthy
# at script start — bails early otherwise.
#
# v1.0.9 Day 6 — acceptance for ROADMAP_V1.0_LAUNCH.md §Semaine 2
# day 6: kill primary, time the standby's promotion, fail when > 60s.
#
# Usage:
#   bash infra/ansible/tests/test_pg_failover.sh
#
# Exit codes:
#   0 — failover happened in < 60s (acceptance met)
#   1 — formation not healthy at start
#   2 — failover did not happen within 60s
#   3 — required tool missing on the host
set -euo pipefail

PRIMARY_CONTAINER=${PRIMARY_CONTAINER:-pgaf-primary}
REPLICA_CONTAINER=${REPLICA_CONTAINER:-pgaf-replica}
MONITOR_CONTAINER=${MONITOR_CONTAINER:-pgaf-monitor}
RTO_TARGET_SECONDS=${RTO_TARGET_SECONDS:-60}
PG_AUTO_FAILOVER_PGDATA=${PG_AUTO_FAILOVER_PGDATA:-/var/lib/postgresql/16/pgaf/postgres}
PG_AUTO_FAILOVER_MONITOR_PGDATA=${PG_AUTO_FAILOVER_MONITOR_PGDATA:-/var/lib/postgresql/16/pgaf/monitor}

log() { printf '[%s] %s\n' "$(date +%H:%M:%S)" "$*" >&2; }
fail() { log "FAIL: $*"; exit "${2:-2}"; }

require() {
  command -v "$1" >/dev/null 2>&1 || fail "required tool missing on host: $1" 3
}

require incus
require date
require awk

# -----------------------------------------------------------------------------
# 0. Sanity — formation must be healthy at start.
# -----------------------------------------------------------------------------
log "step 0: pre-flight — formation state via monitor"
state_before=$(incus exec "$MONITOR_CONTAINER" -- sudo -u postgres \
  pg_autoctl show state --pgdata "$PG_AUTO_FAILOVER_MONITOR_PGDATA" 2>&1 || true)
log "monitor state:"
echo "$state_before" | sed 's/^/  /' >&2

if ! echo "$state_before" | grep -qE 'primary[[:space:]]+\|.*primary'; then
  fail "no primary visible in formation state — refusing to test failover from a degraded baseline" 1
fi
if ! echo "$state_before" | grep -qE 'secondary[[:space:]]+\|.*secondary'; then
  fail "no secondary visible — failover requires a hot standby ready to take over" 1
fi

primary_node=$(echo "$state_before" | awk '/primary[[:space:]]+\|/ {print $1; exit}')
log "current primary node: $primary_node (container: $PRIMARY_CONTAINER)"

# -----------------------------------------------------------------------------
# 1. Kill primary container — simulates a hardware/process death.
# -----------------------------------------------------------------------------
log "step 1: stopping primary container ($PRIMARY_CONTAINER) — start timer"
t0=$(date +%s)
incus stop --force "$PRIMARY_CONTAINER"

# -----------------------------------------------------------------------------
# 2. Poll the monitor until the standby is promoted.
# -----------------------------------------------------------------------------
log "step 2: polling monitor for failover (target RTO ${RTO_TARGET_SECONDS}s)"
deadline=$((t0 + RTO_TARGET_SECONDS))
promoted=0
state_now="" # initialized so the post-mortem dump is safe under set -u
while [ "$(date +%s)" -lt "$deadline" ]; do
  state_now=$(incus exec "$MONITOR_CONTAINER" -- sudo -u postgres \
    pg_autoctl show state --pgdata "$PG_AUTO_FAILOVER_MONITOR_PGDATA" 2>&1 || true)

  # Replica's node name should now appear in the "primary" column AND
  # the previous primary should appear as "demoted" / "draining" / "stopped".
  if echo "$state_now" | grep -qE 'primary[[:space:]]+\|' \
    && ! echo "$state_now" | grep -qE "^[[:space:]]*${primary_node}[[:space:]]+\|.*primary"; then
    promoted=1
    break
  fi
  sleep 1
done

t1=$(date +%s)
elapsed=$((t1 - t0))

# -----------------------------------------------------------------------------
# 3. Restart the killed container so the lab returns to a 2-node
#    formation for subsequent runs.
# -----------------------------------------------------------------------------
log "step 3: restarting $PRIMARY_CONTAINER (it'll come back as standby once it catches up)"
incus start "$PRIMARY_CONTAINER" || true

# -----------------------------------------------------------------------------
# 4. Verdict.
# -----------------------------------------------------------------------------
if [ "$promoted" -eq 1 ] && [ "$elapsed" -le "$RTO_TARGET_SECONDS" ]; then
  log "PASS: failover completed in ${elapsed}s (target ${RTO_TARGET_SECONDS}s)"
  exit 0
fi

log "post-failover state:"
echo "$state_now" | sed 's/^/  /' >&2
fail "no standby promotion within ${RTO_TARGET_SECONDS}s (elapsed ${elapsed}s, promoted=${promoted})"