ROADMAP_V1.0_LAUNCH.md §Semaine 2 (Week 2), day 8 deliverables:
- Postgres backups land in MinIO via pgbackrest
- dr-drill restores them weekly into an ephemeral Incus container
and asserts the data round-trips
- Prometheus alerts fire when the drill fails OR when the timer
has stopped firing for >8 days
Cadence:
full — weekly (Sun 02:00 UTC, systemd timer)
diff — daily (Mon-Sat 02:00 UTC, systemd timer)
WAL — continuous (postgres archive_command, archive_timeout=60s)
drill — weekly (Sun 04:00 UTC — runs 2h after the Sun full so
the restore exercises fresh data)
RPO ≈ 1 min (archive_timeout). RTO ≤ 30 min (drill measures actual
restore wall-clock).
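
For concreteness, the weekly full timer template (described under Files below) might render to something like this. A sketch only: the calendar spec and the 300s randomized delay follow the cadence above, while Persistent= and the Description text are assumptions, not the actual template:

```ini
# Hypothetical render of pgbackrest-full.timer.j2.
[Unit]
Description=pgBackRest weekly full backup

[Timer]
# Sun 02:00 UTC, per the cadence table; RandomizedDelaySec cushions
# clock skew and node collision.
OnCalendar=Sun *-*-* 02:00:00 UTC
RandomizedDelaySec=300
Persistent=true

[Install]
WantedBy=timers.target
```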
Files:
infra/ansible/roles/pgbackrest/
defaults/main.yml — repo1-* config (MinIO/S3, path-style,
aes-256-cbc encryption, vault-backed creds), retention 4 full
/ 7 diff / 4 archive cycles, zstd@3 compression. The role's
first task asserts the placeholder secrets are gone — refuses
to apply until the vault carries real keys.
tasks/main.yml — install pgbackrest, render
/etc/pgbackrest/pgbackrest.conf, set archive_command on the
postgres instance via ALTER SYSTEM, detect role at runtime
via `pg_autoctl show state --json`, stanza-create from primary
only, render + enable systemd timers (full + diff + drill).
templates/pgbackrest.conf.j2 — global + per-stanza sections;
pg1-path defaults to the pg_auto_failover state dir so the
role plugs straight into the Day 6 formation.
templates/pgbackrest-{full,diff,drill}.{service,timer}.j2 —
systemd units. Backup services run as `postgres`,
drill service runs as `root` (needs `incus`).
RandomizedDelaySec on every timer to absorb clock skew + node
collision risk.
README.md — RPO/RTO guarantees, vault setup, repo wiring,
operational cheatsheet (info / check / manual backup),
restore procedure documented separately as the dr-drill.
scripts/dr-drill.sh
Acceptance script for the day. Sequence:
0. pre-flight: required tools, latest backup metadata visible
1. launch ephemeral `pg-restore-drill` Incus container
2. install postgres + pgbackrest inside, push the SAME
pgbackrest.conf as the host (restore is read-only against the
bucket; the same S3 keys are reused, so the drill exercises
the production credential path)
3. `pgbackrest restore` — full + WAL replay
4. start postgres, wait for pg_isready
5. smoke query: SELECT count(*) FROM users — must be ≥ MIN_USERS_EXPECTED
6. write veza_backup_drill_* metrics to the textfile-collector
7. teardown (or --keep for postmortem inspection)
Exit codes 0/1/2 (pass / drill failure / env problem) so a
Prometheus runner can plug in directly.
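
A minimal sketch of how steps 6-7's metric write and the exit-code contract could fit together. classify_exit, write_drill_metrics, and TEXTFILE_DIR are illustrative names, not the real dr-drill.sh internals:

```shell
#!/usr/bin/env bash
# Illustrative sketch only: the function and variable names here are
# assumed, not taken from the actual dr-drill.sh.
set -euo pipefail

TEXTFILE_DIR="${TEXTFILE_DIR:-/var/lib/node_exporter/textfile_collector}"

# Exit-code contract from above: 0 pass, 1 drill failure, 2 env problem.
classify_exit() {
  case "$1" in
    0) echo "pass" ;;
    1) echo "drill-failure" ;;
    2) echo "env-problem" ;;
    *) echo "unknown" ;;
  esac
}

# Emit node_exporter textfile metrics; success is 1 only for exit code 0,
# and the timestamp is what a staleness alert compares against time().
write_drill_metrics() {
  local rc="$1" out="$2"
  local success=0
  if [ "$rc" -eq 0 ]; then success=1; fi
  {
    printf 'veza_backup_drill_success %d\n' "$success"
    printf 'veza_backup_drill_timestamp_seconds %d\n' "$(date +%s)"
  } > "$out"
}

classify_exit 1   # prints: drill-failure
```

Splitting the gauge (did the last drill pass?) from the timestamp (when did it last report?) is what lets the two alerts below distinguish "failed" from "stopped running".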
config/prometheus/alert_rules.yml — new `veza_backup` group:
- BackupRestoreDrillFailed (critical, 5m): the last drill
reported success=0. Pages because a backup we haven't proved
restorable is technical debt waiting for a disaster.
- BackupRestoreDrillStale (warning, 1h after >8 days): the
drill timer has stopped firing. Catches a broken cron / unit
/ runner before the failure-mode alert above ever sees data.
Both annotations include a runbook_url stub
(veza.fr/runbooks/...) — those land alongside W2 day 10's
SLO runbook batch.
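
Roughly what that group can look like. A sketch: the expressions and thresholds follow the description above, the metric names match the drill script, and the runbook slugs are placeholders for the stubs mentioned:

```yaml
groups:
  - name: veza_backup
    rules:
      - alert: BackupRestoreDrillFailed
        expr: veza_backup_drill_success == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "Last backup restore drill failed"
          runbook_url: https://veza.fr/runbooks/backup-drill-failed  # placeholder slug
      - alert: BackupRestoreDrillStale
        # Drill timer has not reported for more than 8 days.
        expr: time() - veza_backup_drill_timestamp_seconds > 8 * 86400
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Restore drill has not reported in >8 days"
          runbook_url: https://veza.fr/runbooks/backup-drill-stale  # placeholder slug
```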
infra/ansible/playbooks/postgres_ha.yml
Two new plays:
6. apply pgbackrest role to postgres_ha_nodes (install +
config + full/diff timers on every data node;
pgbackrest's repo lock arbitrates collision)
7. install dr-drill on the incus_hosts group (push
/usr/local/bin/dr-drill.sh + render drill timer + ensure
/var/lib/node_exporter/textfile_collector exists)
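For reference, the rendered pgbackrest.conf from the defaults above would look roughly like this. The repo1-* option names are genuine pgbackrest settings and the retention/compression values follow defaults/main.yml; the endpoint/bucket/region variable names and pg1-path are assumptions, not the actual template:

```ini
# Sketch of templates/pgbackrest.conf.j2 — *_endpoint/_bucket/_region
# variable names are illustrative.
[global]
repo1-type=s3
repo1-s3-endpoint={{ pgbackrest_repo_s3_endpoint }}
repo1-s3-bucket={{ pgbackrest_repo_s3_bucket }}
repo1-s3-region={{ pgbackrest_repo_s3_region }}
repo1-s3-uri-style=path
repo1-s3-key={{ pgbackrest_repo_s3_key }}
repo1-s3-key-secret={{ pgbackrest_repo_s3_key_secret }}
repo1-path=/pgbackrest
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass={{ pgbackrest_repo_cipher_pass }}
repo1-retention-full=4
repo1-retention-diff=7
repo1-retention-archive=4
compress-type=zst
compress-level=3

[{{ pgbackrest_stanza }}]
pg1-path={{ pg_auto_failover_state_dir }}/postgres
```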
Acceptance verified locally:
$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
--syntax-check
playbook: playbooks/postgres_ha.yml ← clean
$ python3 -c "import yaml; yaml.safe_load(open('config/prometheus/alert_rules.yml'))"
YAML OK
$ bash -n scripts/dr-drill.sh
syntax OK
Real apply + drill needs the lab R720 + a populated MinIO bucket
+ the secrets in vault — operator's call.
Out of scope (deferred per ROADMAP §2):
- Off-site backup replica (B2 / Bunny.net) — v1.1+
- Logical export pipeline for RGPD per-user dumps — separate
feature track, not a backup-system concern
- PITR admin UI — CLI-only via `--type=time` for v1.0
- pgbackrest_exporter Prometheus integration — W2 day 9
alongside the OTel collector
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
infra/ansible/roles/pgbackrest/tasks/main.yml (132 lines, 4.9 KiB, YAML; excerpt):
# pgBackRest role — installs pgbackrest, renders the stanza config,
# wires the archive_command on the data nodes, and schedules the
# backup + drill timers.
#
# Apply ON the postgres data nodes (pgaf-primary, pgaf-replica).
# The monitor doesn't carry app data and doesn't need a stanza.
---
- name: Sanity check — secrets must not be placeholders
  ansible.builtin.assert:
    that:
      - "'CHANGEME-PGBR' not in pgbackrest_repo_s3_key"
      - "'CHANGEME-PGBR' not in pgbackrest_repo_s3_key_secret"
      - "'CHANGEME-PGBR' not in pgbackrest_repo_cipher_pass"
    fail_msg: >
      pgbackrest_repo_s3_key / _key_secret / _cipher_pass still contain
      the CHANGEME placeholder. Provide a vault file
      group_vars/postgres_ha.vault.yml with vault_pgbackrest_s3_key,
      vault_pgbackrest_s3_key_secret, and vault_pgbackrest_cipher_pass
      before applying. The role refuses to install with placeholders
      to prevent a live rollout pointing at the wrong S3 keys.
  tags: [pgbackrest, secrets]

- name: Install pgBackRest
  ansible.builtin.apt:
    name: pgbackrest
    state: present
    update_cache: true
    cache_valid_time: 3600
  tags: [pgbackrest, packages]

- name: Ensure /etc/pgbackrest exists
  ansible.builtin.file:
    path: /etc/pgbackrest
    state: directory
    owner: postgres
    group: postgres
    mode: "0750"
  tags: [pgbackrest, config]

- name: Render pgbackrest.conf
  ansible.builtin.template:
    src: pgbackrest.conf.j2
    dest: /etc/pgbackrest/pgbackrest.conf
    owner: postgres
    group: postgres
    mode: "0600"
  tags: [pgbackrest, config]

- name: Configure archive_command on the postgres instance
  become: true
  become_user: postgres
  ansible.builtin.shell:
    cmd: |
      psql -h /var/run/postgresql -p {{ pg_auto_failover_node_port | default(5432) }} -U postgres -d postgres <<SQL
      ALTER SYSTEM SET archive_mode = 'on';
      ALTER SYSTEM SET archive_command = 'pgbackrest --stanza={{ pgbackrest_stanza }} archive-push %p';
      ALTER SYSTEM SET archive_timeout = '60';
      SELECT pg_reload_conf();
      SQL
    executable: /bin/bash
  changed_when: false
  tags: [pgbackrest, postgres]

- name: Detect pg_auto_failover role at runtime (primary vs secondary)
  become: true
  become_user: postgres
  ansible.builtin.command:
    cmd: >
      /usr/lib/postgresql/{{ postgres_version }}/bin/pg_autoctl show state
      --pgdata {{ pg_auto_failover_state_dir | default('/var/lib/postgresql/' ~ postgres_version ~ '/pgaf') }}/postgres
      --json
  register: pgaf_state
  changed_when: false
  failed_when: false
  tags: [pgbackrest, init]

- name: Set node-role fact from monitor state
  ansible.builtin.set_fact:
    pgaf_role_runtime: >-
      {{ (pgaf_state.stdout | from_json
          | json_query('[?name==''' ~ inventory_hostname ~ '''].current_state | [0]'))
         | default('unknown') }}
  when: pgaf_state.rc == 0 and pgaf_state.stdout | length > 0
  failed_when: false
  tags: [pgbackrest, init]

- name: Stanza-create (only from the primary — pgbackrest takes a repo-wide lock)
  become: true
  become_user: postgres
  ansible.builtin.command:
    cmd: pgbackrest --stanza={{ pgbackrest_stanza }} --log-level-console=info stanza-create
  register: stanza_create
  changed_when: "'stanza already exists' not in (stanza_create.stdout | default(''))"
  failed_when:
    - stanza_create.rc | default(0) != 0
    - "'stanza already exists' not in (stanza_create.stdout | default(''))"
  when: (pgaf_role_runtime | default('unknown')) == 'primary'
  tags: [pgbackrest, init]

- name: Render systemd timer + service for full / diff / drill
  ansible.builtin.template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    group: root
    mode: "0644"
  loop:
    - { src: pgbackrest-full.service.j2, dest: /etc/systemd/system/pgbackrest-full.service }
    - { src: pgbackrest-full.timer.j2, dest: /etc/systemd/system/pgbackrest-full.timer }
    - { src: pgbackrest-diff.service.j2, dest: /etc/systemd/system/pgbackrest-diff.service }
    - { src: pgbackrest-diff.timer.j2, dest: /etc/systemd/system/pgbackrest-diff.timer }
    - { src: pgbackrest-drill.service.j2, dest: /etc/systemd/system/pgbackrest-drill.service }
    - { src: pgbackrest-drill.timer.j2, dest: /etc/systemd/system/pgbackrest-drill.timer }
  notify: Reload systemd
  tags: [pgbackrest, schedule]

# Enabled on every data node — pgbackrest itself takes a repository-wide
# lock on backup start, so the two nodes can't both run a full backup
# concurrently. The randomized delay (300s) in the timer cushions clock
# skew. After failover, the new primary picks up the schedule on the next
# interval; no manual reconfiguration needed.
- name: Enable + start backup timers on all data nodes
  ansible.builtin.systemd:
    name: "{{ item }}"
    state: started
    enabled: true
    daemon_reload: true
  loop:
    - pgbackrest-full.timer
    - pgbackrest-diff.timer
  tags: [pgbackrest, schedule]