ROADMAP_V1.0_LAUNCH.md §Semaine 2 day 8 deliverable:
- Postgres backups land in MinIO via pgbackrest
- dr-drill restores them weekly into an ephemeral Incus container
and asserts the data round-trips
- Prometheus alerts fire when the drill fails OR when the timer
has stopped firing for >8 days
Cadence:
full — weekly (Sun 02:00 UTC, systemd timer)
diff — daily (Mon-Sat 02:00 UTC, systemd timer)
WAL — continuous (postgres archive_command, archive_timeout=60s)
drill — weekly (Sun 04:00 UTC — runs 2h after the Sun full so
the restore exercises fresh data)
RPO ≈ 1 min (archive_timeout). RTO ≤ 30 min (drill measures actual
restore wall-clock).
Files:
infra/ansible/roles/pgbackrest/
defaults/main.yml — repo1-* config (MinIO/S3, path-style,
aes-256-cbc encryption, vault-backed creds), retention 4 full
/ 7 diff / 4 archive cycles, zstd@3 compression. The role's
first task asserts the placeholder secrets are gone — refuses
to apply until the vault carries real keys.
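  A rough sketch of those defaults (key names other than the documented
  vault_pgbackrest_* secrets are illustrative, and the CHANGEME placeholder
  pattern is assumed, not read from the role):

    # defaults/main.yml (sketch)
    pgbackrest_stanza: veza
    pgbackrest_repo1_s3_endpoint: minio.lxd:9000
    pgbackrest_repo1_s3_bucket: veza-pgbackrest
    pgbackrest_repo1_s3_uri_style: path
    pgbackrest_repo1_cipher_type: aes-256-cbc
    # vault-backed creds; the placeholder is what the first task refuses to apply
    pgbackrest_s3_key: "{{ vault_pgbackrest_s3_key | default('CHANGEME') }}"
    pgbackrest_s3_key_secret: "{{ vault_pgbackrest_s3_key_secret | default('CHANGEME') }}"
    pgbackrest_cipher_pass: "{{ vault_pgbackrest_cipher_pass | default('CHANGEME') }}"
    # retention 4 full / 7 diff / 4 archive cycles, zstd level 3
    pgbackrest_retention_full: 4
    pgbackrest_retention_diff: 7
    pgbackrest_retention_archive: 4
    pgbackrest_compress_type: zst
    pgbackrest_compress_level: 3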
tasks/main.yml — install pgbackrest, render
/etc/pgbackrest/pgbackrest.conf, set archive_command on the
postgres instance via ALTER SYSTEM, detect role at runtime
via `pg_autoctl show state --json`, stanza-create from primary
only, render + enable systemd timers (full + diff + drill).
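  A hedged sketch of the primary-detection and archive wiring (module
  choices and the primary check are simplified here, not lifted from the
  role):

    # tasks/main.yml (sketch): archive_command + stanza-create gated on the primary
    - name: Detect this node's pg_auto_failover role
      ansible.builtin.command: pg_autoctl show state --json
      become: true
      become_user: postgres
      register: pgaf_state
      changed_when: false

    - name: Ship completed WAL segments to the repo
      ansible.builtin.command: >
        psql -c "ALTER SYSTEM SET archive_command TO
        'pgbackrest --stanza=veza archive-push %p'"
      become: true
      become_user: postgres
      # archive_command is reloadable, so a pg_reload_conf() suffices afterwards

    - name: Create the stanza once, from the primary only
      ansible.builtin.command: pgbackrest --stanza=veza stanza-create
      become: true
      become_user: postgres
      # crude string check; the real role likely parses the JSON properly
      when: "'primary' in pgaf_state.stdout"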
templates/pgbackrest.conf.j2 — global + per-stanza sections;
pg1-path defaults to the pg_auto_failover state dir so the
role plugs straight into the Day 6 formation.
templates/pgbackrest-{full,diff,drill}.{service,timer}.j2 —
systemd units. Backup services run as `postgres`,
drill service runs as `root` (needs `incus`).
RandomizedDelaySec on every timer to absorb clock skew + node
collision risk.
README.md — RPO/RTO guarantees, vault setup, repo wiring,
operational cheatsheet (info / check / manual backup),
restore procedure documented separately as the dr-drill.
scripts/dr-drill.sh
Acceptance script for the day. Sequence:
0. pre-flight: required tools, latest backup metadata visible
1. launch ephemeral `pg-restore-drill` Incus container
2. install postgres + pgbackrest inside, push the SAME
pgbackrest.conf as the host (read-only against the bucket
by pgbackrest semantics — the same s3 keys get reused so
the drill exercises the production credential path)
3. `pgbackrest restore` — full + WAL replay
4. start postgres, wait for pg_isready
5. smoke query: SELECT count(*) FROM users — must be ≥ MIN_USERS_EXPECTED
6. write veza_backup_drill_* metrics to the textfile-collector
7. teardown (or --keep for postmortem inspection)
Exit codes 0/1/2 (pass / drill failure / env problem) so a
Prometheus runner can plug in directly.
config/prometheus/alert_rules.yml — new `veza_backup` group:
- BackupRestoreDrillFailed (critical, 5m): the last drill
reported success=0. Pages because a backup we haven't proved
restorable is technical debt waiting for a disaster.
- BackupRestoreDrillStale (warning, 1h after >8 days): the
drill timer has stopped firing. Catches a broken cron / unit
/ runner before the failure-mode alert above ever sees data.
Both annotations include a runbook_url stub
(veza.fr/runbooks/...) — those land alongside W2 day 10's
SLO runbook batch.
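  A hedged sketch of that group (the exact veza_backup_drill_* metric
  names below are assumptions, not read from the script; the runbook
  paths stay as stubs):

    # config/prometheus/alert_rules.yml (sketch): veza_backup group
    groups:
      - name: veza_backup
        rules:
          - alert: BackupRestoreDrillFailed
            # assumed gauge written by dr-drill.sh: 1 = last drill passed, 0 = failed
            expr: veza_backup_drill_success == 0
            for: 5m
            labels:
              severity: critical
            annotations:
              summary: "Last backup restore drill failed"
              runbook_url: "https://veza.fr/runbooks/..."
          - alert: BackupRestoreDrillStale
            # assumed unix-timestamp metric of the last successful drill
            expr: time() - veza_backup_drill_last_success_timestamp > 8 * 86400
            for: 1h
            labels:
              severity: warning
            annotations:
              summary: "Restore drill has not reported success in over 8 days"
              runbook_url: "https://veza.fr/runbooks/..."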
infra/ansible/playbooks/postgres_ha.yml
Two new plays:
6. apply pgbackrest role to postgres_ha_nodes (install +
config + full/diff timers on every data node;
pgbackrest's repo lock arbitrates collisions)
7. install dr-drill on the incus_hosts group (push
/usr/local/bin/dr-drill.sh + render drill timer + ensure
/var/lib/node_exporter/textfile_collector exists)
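  Roughly what those plays could look like (the src path and task details
  are illustrative; the drill-timer templating is elided):

    # playbooks/postgres_ha.yml (sketch): plays 6 and 7
    - name: Backups on every Postgres data node
      hosts: postgres_ha_nodes
      become: true
      roles:
        - pgbackrest

    - name: DR drill runner on the Incus hosts
      hosts: incus_hosts
      become: true
      tasks:
        - name: Install the drill script
          ansible.builtin.copy:
            src: ../../../scripts/dr-drill.sh   # adjust to the repo layout
            dest: /usr/local/bin/dr-drill.sh
            mode: "0755"
        - name: Ensure the node_exporter textfile directory exists
          ansible.builtin.file:
            path: /var/lib/node_exporter/textfile_collector
            state: directory
            mode: "0755"
        # plus: template and enable the pgbackrest-drill timer (omitted here)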
Acceptance verified locally:
$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
--syntax-check
playbook: playbooks/postgres_ha.yml ← clean
$ python3 -c "import yaml; yaml.safe_load(open('config/prometheus/alert_rules.yml'))"
YAML OK
$ bash -n scripts/dr-drill.sh
syntax OK
Real apply + drill needs the lab R720 + a populated MinIO bucket
+ the secrets in vault — operator's call.
Out of scope (deferred per ROADMAP §2):
- Off-site backup replica (B2 / Bunny.net) — v1.1+
- Logical export pipeline for RGPD per-user dumps — separate
feature track, not a backup-system concern
- PITR admin UI — CLI-only via `--type=time` for v1.0
- pgbackrest_exporter Prometheus integration — W2 day 9
alongside the OTel collector
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
pgbackrest role — Postgres backup + WAL archive to MinIO/S3
Wires pgBackRest into the pg_auto_failover formation: full backup weekly, differential daily, WAL continuously to a MinIO bucket. Backups encrypted at rest with aes-256-cbc. The dr-drill script (scripts/dr-drill.sh) restores into an ephemeral Incus container and asserts the data round-trips — runs weekly via systemd timer, exposes a textfile metric for Prometheus.
Cadence
| job | when | schedule (defaults) | metric source |
|---|---|---|---|
| full | weekly Sun 02:00 UTC | `pgbackrest_schedule_full` | systemd journald |
| diff | daily Mon-Sat 02:00 | `pgbackrest_schedule_diff` | systemd journald |
| WAL | continuous (per file) | postgres `archive_command` | postgres logs + pgBackRest |
| drill | weekly Sun 04:00 UTC | `pgbackrest_drill_schedule` | textfile collector .prom |
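The schedule column holds what are presumably Ansible variables carrying systemd OnCalendar expressions; a hedged example of values matching the cadence above (assumed, not read from the role's defaults):

# defaults (sketch): schedule variables as systemd OnCalendar expressions
pgbackrest_schedule_full: "Sun *-*-* 02:00:00 UTC"
pgbackrest_schedule_diff: "Mon..Sat *-*-* 02:00:00 UTC"
pgbackrest_drill_schedule: "Sun *-*-* 04:00:00 UTC"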
RPO / RTO
- RPO ≈ 1 minute. `archive_timeout=60` forces a WAL switch + push every minute even when traffic is low. Worst-case data loss on a primary's death: the last 60s of WAL that hadn't shipped yet.
- RTO ≤ 30 min. The dr-drill restore runs end-to-end in ~10-20 min on the lab; production should match given the same backup size.
Vault setup
Three secrets — never committed:
# group_vars/postgres_ha.vault.yml (encrypted)
vault_pgbackrest_s3_key: "<MinIO access key>"
vault_pgbackrest_s3_key_secret: "<MinIO secret key>"
vault_pgbackrest_cipher_pass: "<random 64-char passphrase>"
ansible-vault encrypt infra/ansible/group_vars/postgres_ha.vault.yml
The role's first task asserts the placeholders are gone — applying with the placeholder defaults aborts loudly rather than rolling out a misconfigured archive.
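A sketch of that guard, assuming an ansible.builtin.assert task and a CHANGEME-style placeholder (the exact placeholder string and fail message are illustrative):

# tasks/main.yml (sketch): first task of the role
- name: Refuse to apply with placeholder secrets
  ansible.builtin.assert:
    that:
      - vault_pgbackrest_s3_key is defined
      - vault_pgbackrest_s3_key_secret is defined
      - vault_pgbackrest_cipher_pass is defined
      - "'CHANGEME' not in vault_pgbackrest_cipher_pass"
    fail_msg: "pgbackrest vault secrets are still placeholders; encrypt real values first"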
Repo wiring
repo1-type = s3
repo1-s3-endpoint = minio.lxd:9000
repo1-s3-bucket = veza-pgbackrest
repo1-s3-uri-style = path # MinIO speaks path-style by default
repo1-cipher-type = aes-256-cbc
Bucket created by the minio_distributed role (W3 day 12). Until then operators bootstrap with:
mc alias set veza-minio https://minio.lxd:9000 <ACCESS> <SECRET>
mc mb veza-minio/veza-pgbackrest
Operations
# Backup status — most recent full + diff + WAL window:
sudo -u postgres pgbackrest --stanza=veza info
# Manual full backup (use sparingly — it's bandwidth-heavy):
sudo systemctl start pgbackrest-full.service
# Tail the most recent backup log:
sudo journalctl -u pgbackrest-full.service -n 200 --no-pager
# Verify the archive pipeline is healthy (last WAL ship time):
sudo -u postgres pgbackrest --stanza=veza check
Restore — the dr-drill
bash scripts/dr-drill.sh
Sequence:
- Read latest backup label via `pgbackrest info`
- Launch ephemeral Incus container `pg-restore-drill`
- Install postgres + pgbackrest inside, render the same `pgbackrest.conf` (read-only mode against the same bucket)
- `pgbackrest --stanza=veza restore` to recover
- Start postgres
- Connect, run `SELECT count(*) FROM users` — must be > 0 (proves the seed data round-tripped)
- Write `veza_backup_drill_*` metrics to `pgbackrest_drill_metrics_file`
- Tear down the container (or keep it for inspection if `--keep` is passed)
The metrics file is scraped via node_exporter's `--collector.textfile.directory`. Two Prometheus alerts (added in `config/prometheus/alert_rules.yml`) watch it: `BackupRestoreDrillFailed` fires when the most recent drill reported failure, `BackupRestoreDrillStale` when the last successful drill is older than 8 days.
What this role does NOT cover
- Off-site replica — the bucket is single-region MinIO. v1.1+ adds Bunny.net or B2 as a secondary repo (`repo2-*`).
- Point-in-time UI — restore is CLI-only via `--type=time`. Operator-driven, no admin dashboard.
- Logical export — for legal/RGPD requests, `pg_dump` of the relevant rows is a separate path; the binary backups in this role aren't designed to be partially extracted.