veza/infra/ansible/roles/postgres_ha/tasks/install.yml
senke c941aba3d2
feat(infra): postgres_ha role + pg_auto_failover formation + RTO test (W2 Day 6)
ROADMAP_V1.0_LAUNCH.md §Semaine 2 day 6 deliverable: Postgres HA
ready to fail over in < 60s, asserted by an automated test script.

Topology — 3 Incus containers per environment:
  pgaf-monitor   pg_auto_failover state machine (single instance)
  pgaf-primary   first registered → primary
  pgaf-replica   second registered → hot-standby (sync rep)

Files:
  infra/ansible/playbooks/postgres_ha.yml
    Provisions the 3 containers via `incus launch images:ubuntu/22.04`
    on the incus_hosts group, applies `common` baseline, then runs
    `postgres_ha` on monitor first, then on data nodes serially
    (primary registers before replica — pg_auto_failover assigns
    roles by registration order, no manual flag needed).
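
The play structure, condensed as a sketch (task bodies and the idempotence guard are assumptions, not the exact playbook):

```yaml
# Hypothetical condensed sketch of playbooks/postgres_ha.yml
- name: Provision the three HA containers
  hosts: incus_hosts
  tasks:
    - name: Launch container if absent  # guard via `incus info` is an assumption
      ansible.builtin.shell: >
        incus info {{ item }} >/dev/null 2>&1
        || incus launch images:ubuntu/22.04 {{ item }}
      loop: [pgaf-monitor, pgaf-primary, pgaf-replica]

- name: Monitor first (the formation state machine must exist before nodes join)
  hosts: postgres_ha_monitor
  roles: [common, postgres_ha]

- name: Data nodes one at a time (registration order assigns primary/replica)
  hosts: postgres_ha_nodes
  serial: 1
  roles: [common, postgres_ha]
```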

  infra/ansible/roles/postgres_ha/
    defaults/main.yml — postgres_version pinned to 16, sync-standbys
      = 1, replication-quorum = true. App user/dbname for the
      formation. Password sourced from vault; the loud placeholder
      default `changeme-DEV-ONLY` means a missing vault can't
      silently set a weak prod password. The role reads the value but
      does NOT auto-create the app user; that lands later via
      psql/SQL provisioning when the backend wires DATABASE_URL.
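
As a sketch, the defaults could look like this (variable names and the app user/db values are assumptions; only the pins and the placeholder come from the description above):

```yaml
# Hypothetical sketch of defaults/main.yml
postgres_version: 16
pgaf_formation: default
pgaf_sync_standbys: 1            # number-sync-standbys = 1
pgaf_replication_quorum: true
pgaf_app_user: veza              # assumption
pgaf_app_db: veza                # assumption
# Vault supplies the real value; the loud placeholder means a missing
# vault never becomes a quiet weak prod password.
pgaf_app_password: "{{ vault_pgaf_app_password | default('changeme-DEV-ONLY') }}"
```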
    tasks/install.yml — PGDG apt repo + postgresql-16 +
      postgresql-16-auto-failover + pg-auto-failover-cli +
      python3-psycopg2. Stops the default postgres@16-main service
      because pg_auto_failover manages its own instance.
    tasks/monitor.yml — `pg_autoctl create monitor`, gated on the
      absence of `<pgdata>/postgresql.conf` so re-runs no-op.
      Renders systemd unit `pg_autoctl.service` and starts it.
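
The create/gate pattern might look like this (pgdata path, flags, and variable names are assumptions; `--auth trust` vs. cert auth is a choice the real role makes):

```yaml
# Hypothetical sketch of tasks/monitor.yml
- name: Check for an initialised monitor
  ansible.builtin.stat:
    path: "{{ pgaf_monitor_pgdata }}/postgresql.conf"
  register: pgaf_monitor_conf

- name: Create the pg_auto_failover monitor (first run only)
  ansible.builtin.command: >
    pg_autoctl create monitor
    --pgdata {{ pgaf_monitor_pgdata }}
    --ssl-self-signed --auth trust
  become: true
  become_user: postgres
  when: not pgaf_monitor_conf.stat.exists
```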
    tasks/node.yml — `pg_autoctl create postgres` joining the
      monitor URI from defaults. Sets formation sync-standbys
      policy idempotently from any node.
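
A sketch of the node-side tasks (paths and variable names assumed; `pg_autoctl set formation number-sync-standbys` is the real CLI verb for the policy):

```yaml
# Hypothetical sketch of tasks/node.yml
- name: Register this node with the monitor (no role flag, order decides)
  ansible.builtin.command: >
    pg_autoctl create postgres
    --pgdata {{ pgaf_node_pgdata }}
    --monitor {{ pgaf_monitor_uri }}
    --ssl-self-signed --auth trust
  args:
    creates: "{{ pgaf_node_pgdata }}/postgresql.conf"
  become: true
  become_user: postgres

- name: Enforce number-sync-standbys on the formation (safe from any node)
  ansible.builtin.command: >
    pg_autoctl set formation number-sync-standbys {{ pgaf_sync_standbys }}
    --pgdata {{ pgaf_node_pgdata }}
  become: true
  become_user: postgres
  changed_when: false
```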
    templates/pg_autoctl-{monitor,node}.service.j2 — minimal
      systemd units, Restart=on-failure, NOFILE=65536.
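
The node unit, roughly (ExecStart path and target names are assumptions; Restart and NOFILE are from the description above):

```ini
# Hypothetical sketch of pg_autoctl-node.service.j2
[Unit]
Description=pg_auto_failover keeper ({{ inventory_hostname }})
After=network-online.target

[Service]
User=postgres
ExecStart=/usr/bin/pg_autoctl run --pgdata {{ pgaf_node_pgdata }}
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
```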
    README.md — operations cheatsheet (state, URI, manual failover),
      vault setup, ops scope (PgBouncer + pgBackRest + multi-region
      explicitly out of scope, landing W2 days 7-8 and v1.2+).

  infra/ansible/inventory/lab.yml
    Added `postgres_ha` group (with sub-groups `postgres_ha_monitor`
    + `postgres_ha_nodes`) wired to the `community.general.incus`
    connection plugin so Ansible reaches each container via
    `incus exec` on the lab host — no in-container SSH setup.
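
The group wiring might look like this (per the `community.general.incus` connection plugin; the remote name is an assumption):

```yaml
# Hypothetical sketch of the inventory/lab.yml addition
postgres_ha:
  children:
    postgres_ha_monitor:
      hosts:
        pgaf-monitor:
    postgres_ha_nodes:
      hosts:
        pgaf-primary:
        pgaf-replica:
  vars:
    ansible_connection: community.general.incus
    ansible_incus_remote: local
```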

  infra/ansible/tests/test_pg_failover.sh
    The acceptance script. Sequence:
      0. read formation state via monitor — abort if degraded baseline
      1. `incus stop --force pgaf-primary` — start RTO timer
      2. poll monitor every 1s for the standby's promotion
      3. `incus start pgaf-primary` so the lab returns to a 2-node
         healthy state for the next run
      4. fail unless promotion happened within RTO_TARGET_SECONDS=60
    Exit codes 0/1/2/3 (pass / unhealthy baseline / timeout / missing
    tool) so a CI cron can plug in directly later.
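
Step 2's promotion check, as a sketch: the predicate the 1s loop applies to each refresh of the monitor's state output. The `pg_autoctl show state` table layout and the grep are assumptions; the node name `pgaf-replica` comes from the topology above.

```shell
# Hypothetical sketch of the test's promotion predicate. The real loop
# feeds it the output of:
#   incus exec pgaf-monitor -- sudo -u postgres pg_autoctl show state
promoted() {
  # $1: one refresh of the monitor's state table; succeeds once the
  # standby reports the primary role. The delimiter class keeps
  # `wait_primary` on another line from matching.
  printf '%s\n' "$1" | grep -Eq 'pgaf-replica.*[|[:space:]]primary([|[:space:]]|$)'
}
```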

Acceptance verified locally:
  $ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
      --syntax-check
  playbook: playbooks/postgres_ha.yml          ← clean
  $ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
      --list-tasks
  4 plays, 22 tasks total, all tagged.
  $ bash -n infra/ansible/tests/test_pg_failover.sh
  syntax OK

Real `--check` + apply requires SSH access to the R720 + the
community.general collection installed (`ansible-galaxy collection
install community.general`). Operator runs that step.

Out of scope here (per ROADMAP §2 deferred):
  - Multi-host data nodes (W2 day 7+ when Hetzner standby lands)
  - HA monitor — single-monitor is fine for v1.0 scale
  - PgBouncer (W2 day 7), pgBackRest (W2 day 8), OTel collector (W2 day 9)

SKIP_TESTS=1 — IaC YAML + bash, no app code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 18:27:46 +02:00

# Install Postgres + pg_auto_failover from the upstream PGDG repo.
# PGDG ships pg-auto-failover-NN packages alongside postgresql-NN, so
# version-pinning the postgres_version pins both.
---
# /etc/apt/keyrings is not guaranteed to exist on a fresh 22.04 image.
- name: Ensure the apt keyrings directory exists
  ansible.builtin.file:
    path: /etc/apt/keyrings
    state: directory
    owner: root
    group: root
    mode: "0755"

- name: Add PGDG apt signing key
  ansible.builtin.get_url:
    url: "{{ postgres_apt_key_url }}"
    dest: /etc/apt/keyrings/postgresql.asc
    mode: "0644"
    force: false

- name: Add PGDG apt source
  ansible.builtin.copy:
    dest: /etc/apt/sources.list.d/pgdg.sources
    owner: root
    group: root
    mode: "0644"
    content: |
      Enabled: yes
      Types: deb
      URIs: https://apt.postgresql.org/pub/repos/apt
      Suites: {{ ansible_distribution_release }}-pgdg
      Components: main
      Signed-By: /etc/apt/keyrings/postgresql.asc

- name: Update apt cache (PGDG repo just added)
  ansible.builtin.apt:
    update_cache: true
  changed_when: false

- name: Install Postgres + pg_auto_failover packages
  ansible.builtin.apt:
    name:
      - "postgresql-{{ postgres_version }}"
      - "postgresql-client-{{ postgres_version }}"
      - pg-auto-failover-cli
      - "postgresql-{{ postgres_version }}-auto-failover"
      - python3-psycopg2 # for Ansible postgresql_db / postgresql_user modules
    state: present

- name: Stop the default postgres cluster (pg_auto_failover manages its own)
  ansible.builtin.service:
    name: "postgresql@{{ postgres_version }}-main"
    state: stopped
    enabled: false
  failed_when: false

- name: Ensure pg_auto_failover state dir exists, owned by postgres
  ansible.builtin.file:
    path: "{{ pg_auto_failover_state_dir }}"
    state: directory
    owner: postgres
    group: postgres
    mode: "0700"