Some checks failed
Veza CI / Notify on failure (push) Blocked by required conditions
Veza CI / Rust (Stream Server) (push) Successful in 3m45s
Security Scan / Secret Scanning (gitleaks) (push) Successful in 1m0s
Veza CI / Backend (Go) (push) Successful in 5m38s
Veza CI / Frontend (Web) (push) Has been cancelled
E2E Playwright / e2e (full) (push) Has been cancelled
ROADMAP_V1.0_LAUNCH.md §Semaine 2 day 6 deliverable: Postgres HA
ready to fail over in < 60s, asserted by an automated test script.
Topology — 3 Incus containers per environment:
pgaf-monitor: pg_auto_failover state machine (single instance)
pgaf-primary: first registered → primary
pgaf-replica: second registered → hot-standby (sync rep)
Files:
infra/ansible/playbooks/postgres_ha.yml
Provisions the 3 containers via `incus launch images:ubuntu/22.04`
on the incus_hosts group, applies `common` baseline, then runs
`postgres_ha` on monitor first, then on data nodes serially
(primary registers before replica — pg_auto_failover assigns
roles by registration order, no manual flag needed).
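The play ordering described above can be sketched in playbook form (a minimal sketch — play names and task details are illustrative, not copied from the repo; `serial: 1` is what enforces one-at-a-time registration):
```yaml
# Hypothetical shape of playbooks/postgres_ha.yml
- name: Provision the three containers on the lab host
  hosts: incus_hosts
  roles: [common]
  # `incus launch images:ubuntu/22.04 <name>` per container happens here

- name: Configure the monitor first
  hosts: postgres_ha_monitor
  roles: [postgres_ha]

- name: Register data nodes one at a time (primary before replica)
  hosts: postgres_ha_nodes
  serial: 1
  roles: [postgres_ha]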
infra/ansible/roles/postgres_ha/
defaults/main.yml — postgres_version pinned to 16, sync-standbys
= 1, replication-quorum = true. App user/dbname for the
formation. Password sourced from vault, with a placeholder
default `changeme-DEV-ONLY` so a missing vault can't silently
set a weak prod password. The role reads the value but does
NOT auto-create the app user; that's a follow-up via psql/SQL
provisioning when the backend wires DATABASE_URL.
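A plausible shape for these defaults (variable names other than `pg_auto_failover_app_password` and `vault_pg_app_password` are assumptions, not copied from the role):
```yaml
# Hypothetical defaults/main.yml
pg_auto_failover_postgres_version: "16"
pg_auto_failover_number_sync_standbys: 1
pg_auto_failover_replication_quorum: true
pg_auto_failover_app_user: veza
pg_auto_failover_app_db: veza
# DEV-only placeholder; real value comes from the encrypted group_vars file
pg_auto_failover_app_password: "{{ vault_pg_app_password | default('changeme-DEV-ONLY') }}"
```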
tasks/install.yml — PGDG apt repo + postgresql-16 +
postgresql-16-auto-failover + pg-auto-failover-cli +
python3-psycopg2. Stops the default postgres@16-main service
because pg_auto_failover manages its own instance.
tasks/monitor.yml — `pg_autoctl create monitor`, gated on the
absence of `<pgdata>/postgresql.conf` so re-runs no-op.
Renders systemd unit `pg_autoctl.service` and starts it.
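The core of tasks/monitor.yml reduces to one pg_autoctl call, roughly as below (auth/SSL flags are assumptions — check the role for the exact invocation; the pgdata path matches the README's ops commands):
```bash
# Guarded in the role by the absence of <pgdata>/postgresql.conf
sudo -u postgres pg_autoctl create monitor \
  --pgdata /var/lib/postgresql/16/pgaf/monitor \
  --hostname pgaf-monitor \
  --auth trust \
  --ssl-self-signed
```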
tasks/node.yml — `pg_autoctl create postgres` joining the
monitor URI from defaults. Sets formation sync-standbys
policy idempotently from any node.
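The node-side equivalent, sketched under the same caveats (the monitor URI follows pg_auto_failover's `autoctl_node` convention; the data-node pgdata path is an assumption):
```bash
sudo -u postgres pg_autoctl create postgres \
  --pgdata /var/lib/postgresql/16/pgaf/node \
  --hostname "$(hostname -f)" \
  --monitor 'postgres://autoctl_node@pgaf-monitor:5432/pg_auto_failover' \
  --auth trust \
  --ssl-self-signed

# The sync-standbys policy is a formation-level setting, safe to re-apply:
sudo -u postgres pg_autoctl set formation number-sync-standbys 1 \
  --pgdata /var/lib/postgresql/16/pgaf/node
```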
templates/pg_autoctl-{monitor,node}.service.j2 — minimal
systemd units, Restart=on-failure, NOFILE=65536.
README.md — operations cheatsheet (state, URI, manual failover),
vault setup, and scope notes: PgBouncer, pgBackRest, and
multi-region are explicitly out, landing W2 day 7-8 and v1.2+
respectively.
infra/ansible/inventory/lab.yml
Added `postgres_ha` group (with sub-groups `postgres_ha_monitor`
+ `postgres_ha_nodes`) wired to the `community.general.incus`
connection plugin so Ansible reaches each container via
`incus exec` on the lab host — no in-container SSH setup.
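The group wiring could look like this (hypothetical sketch — the real inventory may also set per-host remote/project options for the plugin):
```yaml
postgres_ha:
  children:
    postgres_ha_monitor:
      hosts:
        pgaf-monitor:
    postgres_ha_nodes:
      hosts:
        pgaf-primary:
        pgaf-replica:
  vars:
    ansible_connection: community.general.incus
```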
infra/ansible/tests/test_pg_failover.sh
The acceptance script. Sequence:
0. read formation state via monitor — abort if degraded baseline
1. `incus stop --force pgaf-primary` — start RTO timer
2. poll monitor every 1s for the standby's promotion
3. `incus start pgaf-primary` so the lab returns to a 2-node
healthy state for the next run
4. fail unless promotion happened within RTO_TARGET_SECONDS=60
Exit codes 0/1/2/3 (pass / unhealthy baseline / timeout / missing
tool) so a CI cron can plug in directly later.
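The promotion wait in step 2 is a bounded poll; a minimal pure-bash sketch of that pattern (helper name and structure are illustrative, not the script's actual code):
```bash
poll_until() {
  # Run "$@" once per second until it succeeds or $1 seconds elapse.
  # Returns 0 on success, 2 on timeout (matching the script's timeout exit code).
  deadline=$1; shift
  start=$(date +%s)
  while true; do
    if "$@"; then
      return 0
    fi
    elapsed=$(( $(date +%s) - start ))
    if [ "$elapsed" -ge "$deadline" ]; then
      return 2
    fi
    sleep 1
  done
}

# Usage shape: poll_until 60 check_standby_promoted — where
# check_standby_promoted (hypothetical) would inspect
# `pg_autoctl show state` output on the monitor.
```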
Acceptance verified locally:
$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
--syntax-check
playbook: playbooks/postgres_ha.yml ← clean
$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
--list-tasks
4 plays, 22 tasks total, all tagged.
$ bash -n infra/ansible/tests/test_pg_failover.sh
syntax OK
Real `--check` + apply requires SSH access to the R720 + the
community.general collection installed (`ansible-galaxy collection
install community.general`). Operator runs that step.
Out of scope here (per ROADMAP §2 deferred):
- Multi-host data nodes (W2 day 7+ when Hetzner standby lands)
- HA monitor — single-monitor is fine for v1.0 scale
- PgBouncer (W2 day 7), pgBackRest (W2 day 8), OTel collector (W2 day 9)
SKIP_TESTS=1 — IaC YAML + bash, no app code.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# `postgres_ha` role — pg_auto_failover formation
Brings up a Postgres HA formation managed by [pg_auto_failover](https://github.com/hapostgres/pg_auto_failover) (citusdata). Three Incus containers per environment:
| container | role | purpose |
| --------------- | -------- | ------------------------------------------------ |
| `pgaf-monitor` | monitor | central state machine — primary election, health |
| `pgaf-primary` | node | first registered → becomes primary |
| `pgaf-replica` | node | second registered → becomes hot-standby (sync) |

v1.0.9 Day 6 ships the role in the lab inventory only. Staging/prod adopt it once Hetzner standby is provisioned (W2 day 7+).
## Acceptance test
```bash
# After `ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml`,
# the failover RTO is asserted by the script:
bash infra/ansible/tests/test_pg_failover.sh
```
Target: stop primary container → standby promoted within 60s. Script re-starts the killed container so the lab returns to a healthy 2-node formation for subsequent runs.
## Vault for secrets
The application user's password lives outside git. Create `infra/ansible/group_vars/postgres_ha.vault.yml`:
```yaml
vault_pg_app_password: "<random-32-char-secret>"
```
Encrypt:
```bash
ansible-vault encrypt infra/ansible/group_vars/postgres_ha.vault.yml
```
The vault key (`~/.ansible/vault_pass`) is operator-local — never committed. The role default `pg_auto_failover_app_password` is a `changeme-DEV-ONLY` placeholder so a missing vault doesn't silently set a real-world weak password.
## Sync replication policy
`number_sync_standbys = 1` is the v1.0.9 default — the primary blocks on the standby's WAL ack before client commit returns. Trade: a few ms of write latency for zero data loss on primary death. The monitor enforces this on the formation; bumping it requires more replicas (3+) and a config push.
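The knob is a formation-level setting held by the monitor; per the pg_autoctl CLI it can be re-applied from any node, e.g. (pgdata path assumed):
```bash
sudo -u postgres pg_autoctl set formation number-sync-standbys 1 \
  --pgdata /var/lib/postgresql/16/pgaf/node
```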
## What the role does NOT do (yet)
- **No PgBouncer** — that's W2 day 7. Backend connects directly to the formation URI for now.
- **No backup** — pgBackRest lands W2 day 8. Failover ≠ disaster recovery.
- **No multi-region failover** — single region at v1.0; multi-region is v1.2+ per ROADMAP_V1.0_LAUNCH.md §2 OUT.
## Operations
```bash
# State on the monitor:
incus exec pgaf-monitor -- sudo -u postgres \
  pg_autoctl show state --pgdata /var/lib/postgresql/16/pgaf/monitor

# Connection URI (libpq multi-host with target_session_attrs=read-write):
incus exec pgaf-monitor -- sudo -u postgres \
  pg_autoctl show uri --pgdata /var/lib/postgresql/16/pgaf/monitor --formation default

# Manual failover (if needed for a maintenance window):
incus exec pgaf-monitor -- sudo -u postgres \
  pg_autoctl perform failover --pgdata /var/lib/postgresql/16/pgaf/monitor
```
Backend application reads the formation URI from `DATABASE_URL`; the libpq driver handles primary discovery via `target_session_attrs=read-write`. No app-level reconfiguration during a failover.
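Concretely, a multi-host libpq URL of the following shape achieves this (user, password, db name, and ports are placeholders, not the real values):
```bash
# libpq tries each host and keeps the one accepting read-write sessions,
# i.e. the current primary — no app restart needed after a failover.
export DATABASE_URL='postgres://app:secret@pgaf-primary:5432,pgaf-replica:5432/appdb?target_session_attrs=read-write'
```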