feat(infra): pgbackrest role + dr-drill + Prometheus backup alerts (W2 Day 8)
Some checks failed
Veza CI / Frontend (Web) (push) Failing after 16m6s
Veza CI / Notify on failure (push) Successful in 11s
E2E Playwright / e2e (full) (push) Successful in 19m59s
Veza CI / Rust (Stream Server) (push) Successful in 4m57s
Security Scan / Secret Scanning (gitleaks) (push) Successful in 49s
Veza CI / Backend (Go) (push) Successful in 6m4s

ROADMAP_V1.0_LAUNCH.md §Semaine 2 day 8 deliverable:
  - Postgres backups land in MinIO via pgbackrest
  - dr-drill restores them weekly into an ephemeral Incus container
    and asserts the data round-trips
  - Prometheus alerts fire when the drill fails OR when the timer
    has stopped firing for >8 days

Cadence:
  full   — weekly  (Sun 02:00 UTC, systemd timer)
  diff   — daily   (Mon-Sat 02:00 UTC, systemd timer)
  WAL    — continuous (postgres archive_command, archive_timeout=60s)
  drill  — weekly  (Sun 04:00 UTC — runs 2h after the Sun full so
           the restore exercises fresh data)

RPO ≈ 1 min (archive_timeout). RTO ≤ 30 min (drill measures actual
restore wall-clock).
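The alert thresholds follow from that cadence; a quick arithmetic sanity check (nothing here ships in the commit):

```shell
# BackupRestoreDrillStale threshold: one weekly drill missed, plus a
# day of slack before alerting. Matches the literal in alert_rules.yml.
stale_after=$(( 8 * 24 * 3600 ))
echo "$stale_after"   # 691200

# Worst-case RPO: one archive_timeout interval of unshipped WAL.
rpo_seconds=60
echo "$rpo_seconds"
```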

Files:
  infra/ansible/roles/pgbackrest/
    defaults/main.yml — repo1-* config (MinIO/S3, path-style,
      aes-256-cbc encryption, vault-backed creds), retention 4 full
      / 7 diff / 4 archive cycles, zstd@3 compression. The role's
      first task asserts the placeholder secrets are gone — refuses
      to apply until the vault carries real keys.
    tasks/main.yml — install pgbackrest, render
      /etc/pgbackrest/pgbackrest.conf, set archive_command on the
      postgres instance via ALTER SYSTEM, detect role at runtime
      via `pg_autoctl show state --json`, stanza-create from primary
      only, render + enable systemd timers (full + diff + drill).
    templates/pgbackrest.conf.j2 — global + per-stanza sections;
      pg1-path defaults to the pg_auto_failover state dir so the
      role plugs straight into the Day 6 formation.
    templates/pgbackrest-{full,diff,drill}.{service,timer}.j2 —
      systemd units. Backup services run as `postgres`,
      drill service runs as `root` (needs `incus`).
      RandomizedDelaySec on every timer to absorb clock skew + node
      collision risk.
    README.md — RPO/RTO guarantees, vault setup, repo wiring,
      operational cheatsheet (info / check / manual backup),
      restore procedure documented separately as the dr-drill.

  scripts/dr-drill.sh
    Acceptance script for the day. Sequence:
      0. pre-flight: required tools, latest backup metadata visible
      1. launch ephemeral `pg-restore-drill` Incus container
      2. install postgres + pgbackrest inside, push the SAME
         pgbackrest.conf as the host (read-only against the bucket
         by pgbackrest semantics — the same s3 keys get reused so
         the drill exercises the production credential path)
      3. `pgbackrest restore` — full + WAL replay
      4. start postgres, wait for pg_isready
      5. smoke query: SELECT count(*) FROM users — must be ≥ MIN_USERS_EXPECTED
      6. write veza_backup_drill_* metrics to the textfile-collector
      7. teardown (or --keep for postmortem inspection)
    Exit codes 0/1/2 (pass / drill failure / env problem) so a
    Prometheus runner can plug in directly.
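A caller that routes those exit codes could look like this sketch (the helper name is illustrative, not part of this commit):

```shell
# Map dr-drill.sh exit codes to operator-facing messages.
describe_drill_rc() {
  case "$1" in
    0) echo "pass" ;;
    1) echo "drill failure" ;;        # restore or smoke query broke
    2) echo "environment problem" ;;  # missing tool / no backups / no Incus
    *) echo "unexpected ($1)" ;;
  esac
}

# Usage: bash scripts/dr-drill.sh; describe_drill_rc "$?"
describe_drill_rc 2
```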

  config/prometheus/alert_rules.yml — new `veza_backup` group:
    - BackupRestoreDrillFailed (critical, 5m): the last drill
      reported success=0. Pages because a backup we haven't proved
      restorable is technical debt waiting for a disaster.
    - BackupRestoreDrillStale (warning, 1h after >8 days): the
      drill timer has stopped firing. Catches a broken cron / unit
      / runner before the failure-mode alert above ever sees data.
    Both annotations include a runbook_url stub
    (veza.fr/runbooks/...) — those land alongside W2 day 10's
    SLO runbook batch.

  infra/ansible/playbooks/postgres_ha.yml
    Two new plays:
      6. apply pgbackrest role to postgres_ha_nodes (install +
         config + full/diff timers on every data node;
         pgbackrest's repo lock arbitrates collision)
      7. install dr-drill on the incus_hosts group (push
         /usr/local/bin/dr-drill.sh + render drill timer + ensure
         /var/lib/node_exporter/textfile_collector exists)

Acceptance verified locally:
  $ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
      --syntax-check
  playbook: playbooks/postgres_ha.yml          ← clean
  $ python3 -c "import yaml; yaml.safe_load(open('config/prometheus/alert_rules.yml'))"
  YAML OK
  $ bash -n scripts/dr-drill.sh
  syntax OK

Real apply + drill needs the lab R720 + a populated MinIO bucket
+ the secrets in vault — operator's call.

Out of scope (deferred per ROADMAP §2):
  - Off-site backup replica (B2 / Bunny.net) — v1.1+
  - Logical export pipeline for RGPD per-user dumps — separate
    feature track, not a backup-system concern
  - PITR admin UI — CLI-only via `--type=time` for v1.0
  - pgbackrest_exporter Prometheus integration — W2 day 9
    alongside the OTel collector

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
senke 2026-04-28 00:51:00 +02:00
parent ba6e8b4e0e
commit bf31a91ae6
14 changed files with 738 additions and 0 deletions


@@ -36,3 +36,46 @@ groups:
        annotations:
          summary: "Redis is unreachable"
          description: "Redis has been unreachable for more than 30 seconds."
  # v1.0.9 Day 8: backup integrity. The dr-drill.sh script writes
  # textfile-collector metrics on every run. Two failure modes are
  # caught:
  #   1. last drill reported a failure (success=0)
  #   2. drill hasn't run in 8+ days (timer broke, runner offline,
  #      script crashed before write_metric)
  # Both alert because a backup we haven't proved restorable is
  # technical debt waiting for a disaster to bite — finding out at
  # restore-time is too late.
  - name: veza_backup
    rules:
      - alert: BackupRestoreDrillFailed
        expr: veza_backup_drill_last_success == 0
        for: 5m
        labels:
          severity: critical
        annotations:
          summary: "pgBackRest dr-drill last run failed (stanza={{ $labels.stanza }})"
          description: |
            The most recent dr-drill.sh execution reported failure
            (reason={{ $labels.reason }}). Backups exist but a
            restore from them did NOT round-trip the smoke query.
            Investigate via: journalctl -u pgbackrest-drill.service -n 200
            and consider running the drill manually with --keep to
            inspect the restored container before teardown.
          runbook_url: "https://veza.fr/runbooks/backup-restore-drill-failed"
      - alert: BackupRestoreDrillStale
        expr: time() - veza_backup_drill_last_run_timestamp_seconds > 691200  # 8 days
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "pgBackRest dr-drill hasn't run in 8+ days"
          description: |
            The dr-drill timer fires weekly (Sun 04:00 UTC). A run
            older than 8 days means the timer is broken, the runner
            is offline, or the script crashed before writing its
            metrics file. Verify with:
              systemctl status pgbackrest-drill.timer
              journalctl -u pgbackrest-drill.service -n 200
          runbook_url: "https://veza.fr/runbooks/backup-restore-drill-stale"


@@ -86,3 +86,62 @@
  gather_facts: true
  roles:
    - pgbouncer
# v1.0.9 Day 8: pgBackRest on the data nodes (archive_command + full
# / diff timers + stanza-create from whoever is primary).
- name: Install + configure pgBackRest on the data nodes
  hosts: postgres_ha_nodes
  become: true
  gather_facts: true
  roles:
    - pgbackrest

# Drill installer — runs on the Incus host so it can `incus launch`
# the ephemeral restore container. Pushes dr-drill.sh to
# /usr/local/bin, ensures the textfile-collector dir exists for
# node_exporter, and wires the weekly drill timer.
- name: Install dr-drill on the Incus host
  hosts: incus_hosts
  become: true
  gather_facts: true
  tasks:
    - name: Push dr-drill.sh to /usr/local/bin
      ansible.builtin.copy:
        src: ../../../scripts/dr-drill.sh
        dest: /usr/local/bin/dr-drill.sh
        owner: root
        group: root
        mode: "0755"
      tags: [pgbackrest, drill]

    - name: Ensure node_exporter textfile collector dir
      ansible.builtin.file:
        path: /var/lib/node_exporter/textfile_collector
        state: directory
        owner: node_exporter
        group: node_exporter
        mode: "0755"
      tags: [pgbackrest, drill]

    - name: Render dr-drill systemd service + timer
      ansible.builtin.template:
        src: "../roles/pgbackrest/templates/{{ item.src }}"
        dest: "{{ item.dest }}"
        owner: root
        group: root
        mode: "0644"
      loop:
        - { src: pgbackrest-drill.service.j2, dest: /etc/systemd/system/pgbackrest-drill.service }
        - { src: pgbackrest-drill.timer.j2, dest: /etc/systemd/system/pgbackrest-drill.timer }
      vars:
        pgbackrest_stanza: "{{ hostvars[groups['postgres_ha_nodes'][0]]['pgbackrest_stanza'] | default('veza') }}"
        pgbackrest_drill_schedule: "{{ hostvars[groups['postgres_ha_nodes'][0]]['pgbackrest_drill_schedule'] | default('Sun *-*-* 04:00:00') }}"
      tags: [pgbackrest, drill]

    - name: Enable + start drill timer
      ansible.builtin.systemd:
        name: pgbackrest-drill.timer
        state: started
        enabled: true
        daemon_reload: true
      tags: [pgbackrest, drill]


@ -0,0 +1,92 @@
# `pgbackrest` role — Postgres backup + WAL archive to MinIO/S3
Wires pgBackRest into the pg_auto_failover formation: full backup weekly, differential daily, WAL continuously to a MinIO bucket. Backups encrypted at rest with `aes-256-cbc`. The dr-drill script (`scripts/dr-drill.sh`) restores into an ephemeral Incus container and asserts the data round-trips — runs weekly via systemd timer, exposes a textfile metric for Prometheus.
## Cadence
| job | when | schedule (defaults) | metric source |
| ---- | --------------------- | -------------------------------- | ------------------------- |
| full | weekly Sun 02:00 UTC | `pgbackrest_schedule_full` | systemd journald |
| diff | daily Mon-Sat 02:00 | `pgbackrest_schedule_diff` | systemd journald |
| WAL | continuous (per file) | postgres `archive_command` | postgres logs + pgBackRest |
| drill| weekly Sun 04:00 UTC | `pgbackrest_drill_schedule` | textfile collector .prom |
## RPO / RTO
- **RPO** ≈ 1 minute. `archive_timeout=60` forces a WAL switch + push every minute even when traffic is low. Worst-case data loss on a primary's death: the last 60s of WAL that hadn't shipped yet.
- **RTO** ≤ 30 min. The dr-drill restore runs end-to-end in ~10-20 min on the lab; production should match given the same backup size.
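Since the drill exports its wall-clock, RTO drift can be watched before it breaches the budget. A sketch of an extra warning rule (not part of this commit; the 1500s threshold is an assumption, 5 minutes under the budget):

```yaml
- alert: BackupRestoreDrillSlow
  expr: veza_backup_drill_last_duration_seconds > 1500
  for: 0m
  labels:
    severity: warning
  annotations:
    summary: "dr-drill runtime approaching the 30-minute RTO budget"
```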
## Vault setup
Three secrets — never committed:
```bash
# group_vars/postgres_ha.vault.yml (encrypted)
vault_pgbackrest_s3_key: "<MinIO access key>"
vault_pgbackrest_s3_key_secret: "<MinIO secret key>"
vault_pgbackrest_cipher_pass: "<random 64-char passphrase>"
```
```bash
ansible-vault encrypt infra/ansible/group_vars/postgres_ha.vault.yml
```
The role's first task asserts the placeholders are gone — applying with the placeholder defaults aborts loud rather than rolling out a misconfigured archive.
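One way to mint `vault_pgbackrest_cipher_pass` (any high-entropy 64-character string works; `openssl` is just one option):

```shell
# 32 random bytes, hex-encoded = 64 characters for repo1-cipher-pass.
openssl rand -hex 32
```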
## Repo wiring
```ini
repo1-type = s3
repo1-s3-endpoint = minio.lxd:9000
repo1-s3-bucket = veza-pgbackrest
repo1-s3-uri-style = path # MinIO speaks path-style by default
repo1-cipher-type = aes-256-cbc
```
Bucket created by the `minio_distributed` role (W3 day 12). Until then operators bootstrap with:
```bash
mc alias set veza-minio https://minio.lxd:9000 <ACCESS> <SECRET>
mc mb veza-minio/veza-pgbackrest
```
## Operations
```bash
# Backup status — most recent full + diff + WAL window:
sudo -u postgres pgbackrest --stanza=veza info
# Manual full backup (use sparingly — it's bandwidth-heavy):
sudo systemctl start pgbackrest-full.service
# Tail the most recent backup log:
sudo journalctl -u pgbackrest-full.service -n 200 --no-pager
# Verify the archive pipeline is healthy (last WAL ship time):
sudo -u postgres pgbackrest --stanza=veza check
```
## Restore — the dr-drill
```bash
bash scripts/dr-drill.sh
```
Sequence:
1. Read latest backup label via `pgbackrest info`
2. Launch ephemeral Incus container `pg-restore-drill`
3. Install postgres + pgbackrest inside, render the same `pgbackrest.conf` (read-only mode against the same bucket)
4. `pgbackrest --stanza=veza restore` to recover
5. Start postgres
6. Connect, run `SELECT count(*) FROM users` — must be ≥ `MIN_USERS_EXPECTED` (default 1), proving the seed data round-tripped
7. Write `veza_backup_drill_*` metrics to `pgbackrest_drill_metrics_file`
8. Tear down the container (or keep it for inspection if `--keep` is passed)
The metrics file is scraped by node_exporter's `--collector.textfile.directory`. Two Prometheus alerts (added in `config/prometheus/alert_rules.yml`) watch it: `BackupRestoreDrillFailed` fires when the most recent run reported failure (`veza_backup_drill_last_success == 0`), and `BackupRestoreDrillStale` fires when the last run is older than 8 days.
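After a passing run the metrics file written by `write_metric` looks like this (timestamp and duration values illustrative):

```
# HELP veza_backup_drill_last_run_timestamp_seconds Unix epoch of the last drill attempt
# TYPE veza_backup_drill_last_run_timestamp_seconds gauge
veza_backup_drill_last_run_timestamp_seconds 1745700000
# HELP veza_backup_drill_last_success Boolean (1=last drill succeeded, 0=failed)
# TYPE veza_backup_drill_last_success gauge
veza_backup_drill_last_success{stanza="veza",reason="ok"} 1
# HELP veza_backup_drill_last_duration_seconds Wall-clock seconds of the last drill
# TYPE veza_backup_drill_last_duration_seconds gauge
veza_backup_drill_last_duration_seconds 734
```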
## What this role does NOT cover
- **Off-site replica** — the bucket is single-region MinIO. v1.1+ adds Bunny.net or B2 as a secondary repo (`repo2-*`).
- **Point-in-time UI** — restore is CLI-only via `--type=time`. Operator-driven, no admin dashboard.
- **Logical export** — for legal/RGPD requests, `pg_dump` of the relevant rows is a separate path; the binary backups in this role aren't designed to be partially extracted.


@ -0,0 +1,66 @@
# pgBackRest defaults — Postgres backup + WAL archive to MinIO/S3.
# https://pgbackrest.org
#
# v1.0.9 Day 8 — RPO target ≈ 1 min (archive_timeout=60s), RTO
# target < 30 min (the dr-drill timing budget). Backup cadence:
# - full : weekly (Sunday 02:00 UTC)
# - diff : daily Mon-Sat 02:00 UTC
# - WAL : continuous (archive_command after every WAL file)
---
postgres_version: 16
# Repository — MinIO is S3-compatible. The bucket is provisioned by
# the minio_distributed role (W3 day 12); until then operators
# create it manually with `mc mb minio/veza-pgbackrest`.
pgbackrest_repo_type: s3
pgbackrest_repo_s3_endpoint: minio.lxd:9000
pgbackrest_repo_s3_region: us-east-1 # MinIO ignores region but pgbackrest requires the field
pgbackrest_repo_s3_bucket: veza-pgbackrest
pgbackrest_repo_s3_uri_style: path # MinIO speaks path-style by default
pgbackrest_repo_s3_verify_tls: false # lab MinIO uses self-signed certs; flip to true once Let's Encrypt is wired
pgbackrest_repo_path: /
pgbackrest_repo_cipher_type: aes-256-cbc
# Stanza — pgBackRest's name for one Postgres cluster's archive.
# Single stanza per environment for v1.0.9 (one formation = one
# stanza). Multi-cluster envs add suffixed stanzas.
pgbackrest_stanza: veza
# Retention — keep 4 full backups (≈ 1 month at weekly cadence).
# diff/incremental retention is implicit (kept until the parent
# full expires). WAL is kept as long as any in-window full needs
# it for PITR.
pgbackrest_repo_retention_full: 4
pgbackrest_repo_retention_diff: 7
pgbackrest_repo_retention_archive: 4
# Compression — zstd@3 trades CPU for ~50% smaller archive vs gz@6.
# The CPU budget is fine on the R720; bandwidth to MinIO is the
# scarcer resource.
pgbackrest_compress_type: zstd
pgbackrest_compress_level: 3
# Process count — parallel WAL push + parallel backup. 4 is right
# for the 12-core R720 with concurrent backend traffic; bump in
# `group_vars/<env>` for dedicated-host backups.
pgbackrest_process_max: 4
# Secrets — sourced from vault. The role refuses to apply when
# placeholders are still in place to prevent a "live" rollout
# pointing at the wrong S3 keys.
pgbackrest_repo_s3_key: "{{ vault_pgbackrest_s3_key | default('CHANGEME-PGBR-KEY') }}"
pgbackrest_repo_s3_key_secret: "{{ vault_pgbackrest_s3_key_secret | default('CHANGEME-PGBR-SECRET') }}"
pgbackrest_repo_cipher_pass: "{{ vault_pgbackrest_cipher_pass | default('CHANGEME-PGBR-CIPHER') }}"
# Schedule — systemd timers (preferred over cron for journald
# integration). Override per env in group_vars.
pgbackrest_schedule_full: "Sun *-*-* 02:00:00"
pgbackrest_schedule_diff: "Mon..Sat *-*-* 02:00:00"
# Drill schedule — weekly RTO check, runs from any host with
# `incus` access. `dr-drill.sh` writes a textfile metric the
# node_exporter scrapes; the Prometheus alert
# BackupRestoreDrillStale fires when the timestamp gets stale
# (> 8 days = a week's drill missed entirely).
pgbackrest_drill_schedule: "Sun *-*-* 04:00:00"
pgbackrest_drill_metrics_file: /var/lib/node_exporter/textfile_collector/pgbackrest_drill.prom


@ -0,0 +1,4 @@
---
- name: Reload systemd
  ansible.builtin.systemd:
    daemon_reload: true


@ -0,0 +1,132 @@
# pgBackRest role — installs pgbackrest, renders the stanza config,
# wires the archive_command on the data nodes, and schedules the
# backup + drill timers.
#
# Apply ON the postgres data nodes (pgaf-primary, pgaf-replica).
# The monitor doesn't carry app data and doesn't need a stanza.
---
- name: Sanity check — secrets must not be placeholder
  ansible.builtin.assert:
    that:
      - "'CHANGEME-PGBR' not in pgbackrest_repo_s3_key"
      - "'CHANGEME-PGBR' not in pgbackrest_repo_s3_key_secret"
      - "'CHANGEME-PGBR' not in pgbackrest_repo_cipher_pass"
    fail_msg: >
      pgbackrest_repo_s3_key / _secret / cipher_pass still contain
      the CHANGEME placeholder. Provide a vault file
      group_vars/postgres_ha.vault.yml with vault_pgbackrest_s3_key,
      vault_pgbackrest_s3_key_secret, vault_pgbackrest_cipher_pass
      before applying. The role refuses to install with placeholders
      to prevent a live rollout pointing at the wrong S3 keys.
  tags: [pgbackrest, secrets]

- name: Install pgBackRest
  ansible.builtin.apt:
    name: pgbackrest
    state: present
    update_cache: true
    cache_valid_time: 3600
  tags: [pgbackrest, packages]

- name: Ensure /etc/pgbackrest exists
  ansible.builtin.file:
    path: /etc/pgbackrest
    state: directory
    owner: postgres
    group: postgres
    mode: "0750"
  tags: [pgbackrest, config]

- name: Render pgbackrest.conf
  ansible.builtin.template:
    src: pgbackrest.conf.j2
    dest: /etc/pgbackrest/pgbackrest.conf
    owner: postgres
    group: postgres
    mode: "0600"
  tags: [pgbackrest, config]

- name: Configure archive_command on the postgres instance
  become: true
  become_user: postgres
  ansible.builtin.shell:
    cmd: |
      psql -h /var/run/postgresql -p {{ pg_auto_failover_node_port | default(5432) }} -U postgres -d postgres <<SQL
      ALTER SYSTEM SET archive_mode = 'on';
      ALTER SYSTEM SET archive_command = 'pgbackrest --stanza={{ pgbackrest_stanza }} archive-push %p';
      ALTER SYSTEM SET archive_timeout = '60';
      SELECT pg_reload_conf();
      SQL
    executable: /bin/bash
  changed_when: false
  tags: [pgbackrest, postgres]

- name: Detect pg_auto_failover role at runtime (primary vs secondary)
  become: true
  become_user: postgres
  ansible.builtin.command:
    cmd: >
      /usr/lib/postgresql/{{ postgres_version }}/bin/pg_autoctl show state
      --pgdata {{ pg_auto_failover_state_dir | default('/var/lib/postgresql/' ~ postgres_version ~ '/pgaf') }}/postgres
      --json
  register: pgaf_state
  changed_when: false
  failed_when: false
  tags: [pgbackrest, init]

- name: Set node-role fact from monitor state
  ansible.builtin.set_fact:
    pgaf_role_runtime: >-
      {{ (pgaf_state.stdout | from_json | json_query('[?name==''' ~ inventory_hostname ~ '''].current_state | [0]'))
         | default('unknown') }}
  when: pgaf_state.rc == 0 and pgaf_state.stdout | length > 0
  tags: [pgbackrest, init]

- name: Stanza-create (only from the primary — pgbackrest takes a repo-wide lock)
  become: true
  become_user: postgres
  ansible.builtin.command:
    cmd: pgbackrest --stanza={{ pgbackrest_stanza }} --log-level-console=info stanza-create
  register: stanza_create
  changed_when: "'stanza already exists' not in (stanza_create.stdout | default(''))"
  failed_when:
    - stanza_create.rc | default(0) != 0
    - "'stanza already exists' not in (stanza_create.stdout | default(''))"
  when: (pgaf_role_runtime | default('unknown')) == 'primary'
  tags: [pgbackrest, init]

- name: Render systemd timer + service for full / diff / drill
  ansible.builtin.template:
    src: "{{ item.src }}"
    dest: "{{ item.dest }}"
    owner: root
    group: root
    mode: "0644"
  loop:
    - { src: pgbackrest-full.service.j2, dest: /etc/systemd/system/pgbackrest-full.service }
    - { src: pgbackrest-full.timer.j2, dest: /etc/systemd/system/pgbackrest-full.timer }
    - { src: pgbackrest-diff.service.j2, dest: /etc/systemd/system/pgbackrest-diff.service }
    - { src: pgbackrest-diff.timer.j2, dest: /etc/systemd/system/pgbackrest-diff.timer }
    - { src: pgbackrest-drill.service.j2, dest: /etc/systemd/system/pgbackrest-drill.service }
    - { src: pgbackrest-drill.timer.j2, dest: /etc/systemd/system/pgbackrest-drill.timer }
  notify: Reload systemd
  tags: [pgbackrest, schedule]

# Enabled on every data node — pgbackrest itself takes a
# repository-wide lock on backup start, so the two nodes can't
# both run a full backup concurrently. The randomized delay (300s)
# in the timer cushions clock skew. After failover, the new
# primary picks up the schedule on the next interval; no manual
# reconfiguration needed.
- name: Enable + start backup timers on all data nodes
  ansible.builtin.systemd:
    name: "{{ item }}"
    state: started
    enabled: true
    daemon_reload: true
  loop:
    - pgbackrest-full.timer
    - pgbackrest-diff.timer
  tags: [pgbackrest, schedule]


@ -0,0 +1,19 @@
# Managed by Ansible — do not edit by hand.
# pgBackRest daily differential backup.
[Unit]
Description=pgBackRest diff backup ({{ pgbackrest_stanza }})
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
User=postgres
Group=postgres
ExecStart=/usr/bin/pgbackrest --stanza={{ pgbackrest_stanza }} --type=diff backup
Nice=10
IOSchedulingClass=best-effort
IOSchedulingPriority=7
[Install]
WantedBy=multi-user.target


@ -0,0 +1,15 @@
# Managed by Ansible — do not edit by hand.
# Daily differential backup trigger.
[Unit]
Description=pgBackRest diff backup timer ({{ pgbackrest_stanza }})
Requires=pgbackrest-diff.service
[Timer]
OnCalendar={{ pgbackrest_schedule_diff }}
Persistent=true
RandomizedDelaySec=300
Unit=pgbackrest-diff.service
[Install]
WantedBy=timers.target


@ -0,0 +1,23 @@
# Managed by Ansible — do not edit by hand.
# pgBackRest weekly dr-drill — restore into ephemeral Incus
# container, smoke-test, write Prometheus textfile metrics.
[Unit]
Description=pgBackRest dr-drill ({{ pgbackrest_stanza }})
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
User=root
Group=root
# The drill script needs `incus` (root by default), pgbackrest (to
# read the bucket info), and write access to the textfile collector
# directory. Stays on the host that owns the postgres_ha containers.
ExecStart=/usr/local/bin/dr-drill.sh
TimeoutStartSec=1800
StandardOutput=journal
StandardError=journal
[Install]
WantedBy=multi-user.target


@ -0,0 +1,17 @@
# Managed by Ansible — do not edit by hand.
# Weekly dr-drill trigger — runs after the Sunday full backup
# completes (full at 02:00, drill at 04:00) so the restore exercises
# fresh data.
[Unit]
Description=pgBackRest dr-drill timer ({{ pgbackrest_stanza }})
Requires=pgbackrest-drill.service
[Timer]
OnCalendar={{ pgbackrest_drill_schedule }}
Persistent=true
RandomizedDelaySec=600
Unit=pgbackrest-drill.service
[Install]
WantedBy=timers.target


@ -0,0 +1,19 @@
# Managed by Ansible — do not edit by hand.
# pgBackRest weekly full backup.
[Unit]
Description=pgBackRest full backup ({{ pgbackrest_stanza }})
Wants=network-online.target
After=network-online.target
[Service]
Type=oneshot
User=postgres
Group=postgres
ExecStart=/usr/bin/pgbackrest --stanza={{ pgbackrest_stanza }} --type=full backup
Nice=10
IOSchedulingClass=best-effort
IOSchedulingPriority=7
[Install]
WantedBy=multi-user.target


@ -0,0 +1,15 @@
# Managed by Ansible — do not edit by hand.
# Weekly full backup trigger.
[Unit]
Description=pgBackRest full backup timer ({{ pgbackrest_stanza }})
Requires=pgbackrest-full.service
[Timer]
OnCalendar={{ pgbackrest_schedule_full }}
Persistent=true
RandomizedDelaySec=300
Unit=pgbackrest-full.service
[Install]
WantedBy=timers.target


@ -0,0 +1,42 @@
# Managed by Ansible — do not edit by hand.
# Source: infra/ansible/roles/pgbackrest/templates/pgbackrest.conf.j2
[global]
repo1-type = {{ pgbackrest_repo_type }}
repo1-s3-endpoint = {{ pgbackrest_repo_s3_endpoint }}
repo1-s3-region = {{ pgbackrest_repo_s3_region }}
repo1-s3-bucket = {{ pgbackrest_repo_s3_bucket }}
repo1-s3-uri-style = {{ pgbackrest_repo_s3_uri_style }}
repo1-s3-verify-tls = {{ pgbackrest_repo_s3_verify_tls | string | lower }}
repo1-path = {{ pgbackrest_repo_path }}
repo1-s3-key = {{ pgbackrest_repo_s3_key }}
repo1-s3-key-secret = {{ pgbackrest_repo_s3_key_secret }}
# Encryption — at-rest protection independent of MinIO server-side
# encryption. The cipher passphrase is mandatory; without it the
# repo is unencrypted and a leaked S3 key reveals all data.
repo1-cipher-type = {{ pgbackrest_repo_cipher_type }}
repo1-cipher-pass = {{ pgbackrest_repo_cipher_pass }}
# Retention — see role defaults for rationale.
repo1-retention-full = {{ pgbackrest_repo_retention_full }}
repo1-retention-diff = {{ pgbackrest_repo_retention_diff }}
repo1-retention-archive = {{ pgbackrest_repo_retention_archive }}
compress-type = {{ pgbackrest_compress_type }}
compress-level = {{ pgbackrest_compress_level }}
process-max = {{ pgbackrest_process_max }}
log-level-console = info
log-level-file = detail
log-path = /var/log/pgbackrest
# Start fast on a fresh box; pre-existing repos pick up where they
# left off without intervention.
start-fast = y
[{{ pgbackrest_stanza }}]
pg1-path = {{ pg_auto_failover_state_dir | default('/var/lib/postgresql/' ~ postgres_version ~ '/pgaf') }}/postgres
pg1-port = {{ pg_auto_failover_node_port | default(5432) }}
pg1-user = postgres
pg1-socket-path = /var/run/postgresql

scripts/dr-drill.sh (executable)

@ -0,0 +1,192 @@
#!/usr/bin/env bash
# dr-drill.sh — Postgres backup restore drill.
#
# Restores the most recent pgBackRest full+WAL into an ephemeral
# Incus container, runs a smoke query against the recovered DB,
# tears the container down, and writes a textfile metric for the
# Prometheus alert BackupRestoreDrillFailed.
#
# Acceptance for ROADMAP_V1.0_LAUNCH.md §Semaine 2 day 8.
#
# Usage:
# bash scripts/dr-drill.sh [--keep]
#
# Env overrides:
# PGBACKREST_STANZA default: veza
#   PGBACKREST_CONF_HOST  default: /etc/pgbackrest/pgbackrest.conf
#                         (pushed into the drill container so
#                         the same S3 creds + cipher pass apply)
# POSTGRES_VERSION default: 16
# DRILL_CONTAINER default: pg-restore-drill
# DRILL_METRICS_FILE default: /var/lib/node_exporter/textfile_collector/pgbackrest_drill.prom
# MIN_USERS_EXPECTED default: 1 ; set higher when the seed grows
#
# Exit codes:
# 0 — drill passed (restore + smoke query OK)
# 1 — drill failed (restore error, smoke query failure, or
# short user count)
# 2 — environment problem (missing tool, no backups, can't
# reach the Incus host)
set -euo pipefail
PGBACKREST_STANZA=${PGBACKREST_STANZA:-veza}
PGBACKREST_CONF_HOST=${PGBACKREST_CONF_HOST:-/etc/pgbackrest/pgbackrest.conf}
POSTGRES_VERSION=${POSTGRES_VERSION:-16}
DRILL_CONTAINER=${DRILL_CONTAINER:-pg-restore-drill}
DRILL_METRICS_FILE=${DRILL_METRICS_FILE:-/var/lib/node_exporter/textfile_collector/pgbackrest_drill.prom}
DRILL_METRICS_TMP=${DRILL_METRICS_FILE}.tmp
MIN_USERS_EXPECTED=${MIN_USERS_EXPECTED:-1}
KEEP_CONTAINER=0
if [ "${1:-}" = "--keep" ]; then KEEP_CONTAINER=1; fi
log()  { printf '[%s] %s\n' "$(date +%H:%M:%S)" "$*" >&2; }
fail() { log "FAIL: $*"; write_metric 0 "${1:-failed}" "${SECONDS}"; exit "${2:-1}"; }
require() { command -v "$1" >/dev/null 2>&1 || { log "missing tool: $1"; exit 2; }; }
write_metric() {
  local success="$1" reason="${2:-ok}" duration="${3:-0}"
  local now
  now=$(date +%s)
  mkdir -p "$(dirname "$DRILL_METRICS_FILE")"
  cat >"$DRILL_METRICS_TMP" <<EOF
# HELP veza_backup_drill_last_run_timestamp_seconds Unix epoch of the last drill attempt
# TYPE veza_backup_drill_last_run_timestamp_seconds gauge
veza_backup_drill_last_run_timestamp_seconds ${now}
# HELP veza_backup_drill_last_success Boolean (1=last drill succeeded, 0=failed)
# TYPE veza_backup_drill_last_success gauge
veza_backup_drill_last_success{stanza="${PGBACKREST_STANZA}",reason="${reason}"} ${success}
# HELP veza_backup_drill_last_duration_seconds Wall-clock seconds of the last drill
# TYPE veza_backup_drill_last_duration_seconds gauge
veza_backup_drill_last_duration_seconds ${duration}
EOF
  mv "$DRILL_METRICS_TMP" "$DRILL_METRICS_FILE"
}
cleanup() {
  if [ "$KEEP_CONTAINER" -eq 1 ]; then
    log "leaving $DRILL_CONTAINER alive (--keep)"
    return
  fi
  if incus info "$DRILL_CONTAINER" >/dev/null 2>&1; then
    log "tearing down $DRILL_CONTAINER"
    incus delete --force "$DRILL_CONTAINER" || true
  fi
}
trap cleanup EXIT
# -----------------------------------------------------------------------------
# 0. Pre-flight.
# -----------------------------------------------------------------------------
require incus
require pgbackrest
require date
[ -f "$PGBACKREST_CONF_HOST" ] || fail "pgbackrest.conf not found at $PGBACKREST_CONF_HOST" 2
log "step 0: read latest backup metadata for stanza=$PGBACKREST_STANZA"
backup_info=$(pgbackrest --stanza="$PGBACKREST_STANZA" --output=text info 2>&1 || true)
echo "$backup_info" | sed 's/^/ /' >&2
if ! echo "$backup_info" | grep -q "full backup:"; then
  fail "no full backup visible — has the stanza had time to run yet?" 2
fi
# -----------------------------------------------------------------------------
# 1. Provision the drill container.
# -----------------------------------------------------------------------------
log "step 1: launching $DRILL_CONTAINER (ephemeral Ubuntu 22.04)"
if incus info "$DRILL_CONTAINER" >/dev/null 2>&1; then
  log "  pre-existing container, tearing it down for a clean run"
  incus delete --force "$DRILL_CONTAINER"
fi
incus launch images:ubuntu/22.04 "$DRILL_CONTAINER" -c security.privileged=true
# Wait for cloud-init.
for _ in $(seq 1 60); do
  if incus exec "$DRILL_CONTAINER" -- cloud-init status 2>/dev/null | grep -q "status: done"; then
    break
  fi
  sleep 1
done
# -----------------------------------------------------------------------------
# 2. Install postgres + pgbackrest inside, push the same config in
# (read-only against the bucket).
# -----------------------------------------------------------------------------
log "step 2: installing postgres + pgbackrest in $DRILL_CONTAINER"
incus exec "$DRILL_CONTAINER" -- bash -c "
  set -e
  apt-get update >/dev/null
  apt-get install -y curl ca-certificates gnupg lsb-release >/dev/null
  install -d -m 0755 /etc/apt/keyrings
  curl -fsSL https://www.postgresql.org/media/keys/ACCC4CF8.asc -o /etc/apt/keyrings/postgresql.asc
  echo 'deb [signed-by=/etc/apt/keyrings/postgresql.asc] https://apt.postgresql.org/pub/repos/apt jammy-pgdg main' \
    > /etc/apt/sources.list.d/pgdg.list
  apt-get update >/dev/null
  DEBIAN_FRONTEND=noninteractive apt-get install -y \
    postgresql-${POSTGRES_VERSION} \
    postgresql-client-${POSTGRES_VERSION} \
    pgbackrest >/dev/null
  systemctl stop postgresql@${POSTGRES_VERSION}-main || true
  rm -rf /var/lib/postgresql/${POSTGRES_VERSION}/main
  install -d -o postgres -g postgres -m 0700 /var/lib/postgresql/${POSTGRES_VERSION}/main
  install -d -o postgres -g postgres -m 0750 /etc/pgbackrest
  install -d -o postgres -g postgres -m 0750 /var/log/pgbackrest
"
incus file push "$PGBACKREST_CONF_HOST" "$DRILL_CONTAINER/etc/pgbackrest/pgbackrest.conf"
incus exec "$DRILL_CONTAINER" -- chown postgres:postgres /etc/pgbackrest/pgbackrest.conf
# Patch the conf so pg1-path points at the empty-dir we just made,
# and add `delta = y` for resumable restores. Stanza name and S3
# credentials carry over verbatim — the drill restores from the
# real prod repo (read-only via pgbackrest semantics).
incus exec "$DRILL_CONTAINER" -- bash -c "
  sed -i 's|^pg1-path =.*|pg1-path = /var/lib/postgresql/${POSTGRES_VERSION}/main|' /etc/pgbackrest/pgbackrest.conf
  echo 'delta = y' >> /etc/pgbackrest/pgbackrest.conf
"
# -----------------------------------------------------------------------------
# 3. Restore.
# -----------------------------------------------------------------------------
log "step 3: pgbackrest restore (latest backup, full WAL replay)"
incus exec "$DRILL_CONTAINER" -- sudo -u postgres \
  pgbackrest --stanza="$PGBACKREST_STANZA" --log-level-console=info restore \
  || fail "restore failed" 1
# -----------------------------------------------------------------------------
# 4. Start postgres + smoke query.
# -----------------------------------------------------------------------------
log "step 4: starting postgres + waiting for ready"
incus exec "$DRILL_CONTAINER" -- bash -c "
  systemctl start postgresql@${POSTGRES_VERSION}-main
  for i in \$(seq 1 30); do
    if sudo -u postgres pg_isready -p 5432 >/dev/null 2>&1; then
      break
    fi
    sleep 1
  done
"
if ! incus exec "$DRILL_CONTAINER" -- sudo -u postgres pg_isready -p 5432 >/dev/null 2>&1; then
  fail "postgres did not become ready inside drill container" 1
fi
log "step 5: smoke query — SELECT count(*) FROM users"
users_count=$(incus exec "$DRILL_CONTAINER" -- sudo -u postgres \
  psql -At -d veza -c 'select count(*) from users' 2>&1 || true)
log "users.count = $users_count (expecting >= $MIN_USERS_EXPECTED)"
if ! [[ "$users_count" =~ ^[0-9]+$ ]]; then
  fail "users count is not numeric: '$users_count' (table missing? wrong db?)" 1
fi
if [ "$users_count" -lt "$MIN_USERS_EXPECTED" ]; then
  fail "users count $users_count < expected $MIN_USERS_EXPECTED — backup may be broken" 1
fi
# -----------------------------------------------------------------------------
# 6. Verdict.
# -----------------------------------------------------------------------------
write_metric 1 "ok" "$SECONDS"
log "PASS: drill completed in ${SECONDS}s, users=$users_count"
exit 0