veza/infra/ansible/roles/postgres_ha/tasks/node.yml


feat(infra): postgres_ha role + pg_auto_failover formation + RTO test (W2 Day 6)

ROADMAP_V1.0_LAUNCH.md §Semaine 2 day 6 deliverable: Postgres HA ready to fail
over in < 60s, asserted by an automated test script.

Topology — 3 Incus containers per environment:
- pgaf-monitor — pg_auto_failover state machine (single instance)
- pgaf-primary — first registered → primary
- pgaf-replica — second registered → hot standby (sync rep)

Files:

infra/ansible/playbooks/postgres_ha.yml
  Provisions the 3 containers via `incus launch images:ubuntu/22.04` on the
  incus_hosts group, applies the `common` baseline, then runs `postgres_ha` on
  the monitor first and on the data nodes serially (the primary registers
  before the replica — pg_auto_failover assigns roles by registration order,
  no manual flag needed). A hedged skeleton of this play ordering is sketched
  below.

infra/ansible/roles/postgres_ha/
  defaults/main.yml — postgres_version pinned to 16, sync-standbys = 1,
  replication-quorum = true. App user/dbname for the formation. Password
  sourced from vault (placeholder default `changeme-DEV-ONLY`, so a missing
  vault doesn't silently ship a weak prod password — the role reads the value
  but does NOT auto-create the app user; that's a follow-up via psql/SQL
  provisioning when the backend wires DATABASE_URL).
  tasks/install.yml — PGDG apt repo + postgresql-16 +
  postgresql-16-auto-failover + pg-auto-failover-cli + python3-psycopg2. Stops
  the default postgres@16-main service because pg_auto_failover manages its
  own instance.
  tasks/monitor.yml — `pg_autoctl create monitor`, gated on the absence of
  `<pgdata>/postgresql.conf` so re-runs no-op. Renders the systemd unit
  `pg_autoctl.service` and starts it.
  tasks/node.yml — `pg_autoctl create postgres` joining the monitor URI from
  defaults. Sets the formation sync-standbys policy idempotently from any node.
  templates/pg_autoctl-{monitor,node}.service.j2 — minimal systemd units,
  Restart=on-failure, NOFILE=65536.
  README.md — operations cheatsheet (state, URI, manual failover), vault
  setup, ops scope (PgBouncer + pgBackRest + multi-region explicitly out —
  landing W2 day 7-8 and v1.2+).

infra/ansible/inventory/lab.yml
  Added a `postgres_ha` group (with sub-groups `postgres_ha_monitor` +
  `postgres_ha_nodes`) wired to the `community.general.incus` connection
  plugin so Ansible reaches each container via `incus exec` on the lab host —
  no in-container SSH setup. A hedged inventory excerpt is sketched below.

infra/ansible/tests/test_pg_failover.sh
  The acceptance script (its polling core is sketched below). Sequence:
  0. read the formation state via the monitor — abort if the baseline is degraded
  1. `incus stop --force pgaf-primary` — start the RTO timer
  2. poll the monitor every 1s for the standby's promotion
  3. `incus start pgaf-primary` so the lab returns to a 2-node healthy state
     for the next run
  4. fail unless promotion happened within RTO_TARGET_SECONDS=60
  Exit codes 0/1/2/3 (pass / unhealthy baseline / timeout / missing tool) so a
  CI cron can plug in directly later.

Acceptance verified locally:

$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
    --syntax-check
playbook: playbooks/postgres_ha.yml ← clean

$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
    --list-tasks
4 plays, 22 tasks across plays, all tagged.

$ bash -n infra/ansible/tests/test_pg_failover.sh
syntax OK

A real `--check` + apply requires SSH access to the R720 and the
community.general collection installed
(`ansible-galaxy collection install community.general`). The operator runs
that step.
Out of scope here (per ROADMAP §2 deferred):
- Multi-host data nodes (W2 day 7+ when the Hetzner standby lands)
- HA monitor — a single monitor is fine for v1.0 scale
- PgBouncer (W2 day 7), pgBackRest (W2 day 8), OTel collector (W2 day 9)

SKIP_TESTS=1 — IaC YAML + bash, no app code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 16:27:46 +00:00
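
Play ordering is what makes role assignment deterministic: the monitor must exist before any data node registers, and the data nodes must register one at a time. The skeleton below is a hedged illustration of that ordering only — play names, the bare `incus launch` loop, and `serial: 1` are assumptions; the real playbook carries 4 tagged plays with its own idempotency guards.

```yaml
# Sketch of playbooks/postgres_ha.yml (assumptions noted above)
- name: Provision the pgaf-* containers on the lab host
  hosts: incus_hosts
  tasks:
    - name: Launch one Ubuntu 22.04 container per formation member
      ansible.builtin.command:
        cmd: "incus launch images:ubuntu/22.04 {{ item }}"
      loop: [pgaf-monitor, pgaf-primary, pgaf-replica]
      # no "already exists" guard in this sketch; the real play needs one

- name: Baseline all formation members
  hosts: postgres_ha
  roles: [common]

- name: Configure the monitor before any data node
  hosts: postgres_ha_monitor
  roles: [postgres_ha]

- name: Configure data nodes one at a time (registration order picks the primary)
  hosts: postgres_ha_nodes
  serial: 1
  roles: [postgres_ha]
```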
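The inventory wiring can be sketched as the excerpt below, nested under the existing `all.children` of lab.yml. Only `ansible_connection: community.general.incus` is confirmed by the message above; any remote/project options the lab needs are assumptions to be checked against `ansible-doc -t connection community.general.incus`.

```yaml
# Excerpt sketch of inventory/lab.yml (group/host names from the commit message)
postgres_ha:
  children:
    postgres_ha_monitor:
      hosts:
        pgaf-monitor:
    postgres_ha_nodes:
      hosts:
        pgaf-primary:
        pgaf-replica:
  vars:
    # tasks run through `incus exec <inventory_hostname>` on the lab host,
    # so inventory host names must match the container names
    ansible_connection: community.general.incus
```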
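The heart of test_pg_failover.sh is the stop-then-poll loop. A minimal sketch, assuming the monitor's pg_autoctl state directory path and sudo availability inside the container (both assumptions), and covering only exit codes 0 and 2; the baseline check (exit 1) and tool check (exit 3) are omitted.

```bash
#!/usr/bin/env bash
# Sketch of the RTO measurement loop, not the full acceptance script.
set -euo pipefail

RTO_TARGET_SECONDS="${RTO_TARGET_SECONDS:-60}"
MONITOR=pgaf-monitor
PRIMARY=pgaf-primary
REPLICA=pgaf-replica
# assumption: where the role keeps the monitor's pg_autoctl instance
MONITOR_PGDATA=/var/lib/postgresql/pgaf/monitor

show_state() {
  # assumption: sudo is present in the container image
  incus exec "$MONITOR" -- sudo -u postgres \
    pg_autoctl show state --pgdata "$MONITOR_PGDATA"
}

incus stop --force "$PRIMARY"        # RTO timer starts here
start=$(date +%s)

while true; do
  # while healthy the replica's line reads "secondary"; once promoted it
  # reads "wait_primary" (accepting writes) and later "primary"
  if show_state | grep "$REPLICA" | grep -q primary; then
    echo "promoted after $(( $(date +%s) - start ))s"
    incus start "$PRIMARY"           # restore the 2-node baseline for the next run
    exit 0
  fi
  if (( $(date +%s) - start >= RTO_TARGET_SECONDS )); then
    incus start "$PRIMARY"
    echo "no promotion within ${RTO_TARGET_SECONDS}s" >&2
    exit 2                           # timeout, per the script's exit-code contract
  fi
  sleep 1
done
```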
# pg_auto_failover data node — joins the monitor and lets the
# formation assign primary/secondary by registration order. The first
# node to register becomes primary; later nodes become secondaries.
#
# Sync replication is enforced by the monitor itself based on
# `number_sync_standbys` + `replication_quorum`; the number-sync-standbys
# policy is pushed onto the formation by the last task in this file.
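#
# Operator hint: a node's registration and current role can be inspected on
# the node itself by running `pg_autoctl show state` as postgres, with
# --pgdata pointing at the same {{ pg_auto_failover_state_dir }}/postgres
# directory used below.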
---
- name: Check whether the data node is already initialised
  ansible.builtin.stat:
    path: "{{ pg_auto_failover_state_dir }}/postgres/postgresql.conf"
  register: node_initialised

- name: Initialise pg_auto_failover data node (joins the monitor)
  become: true
  become_user: postgres
  ansible.builtin.command:
    cmd: >
      /usr/lib/postgresql/{{ postgres_version }}/bin/pg_autoctl create postgres
      --pgdata {{ pg_auto_failover_state_dir }}/postgres
      --pgctl /usr/lib/postgresql/{{ postgres_version }}/bin/pg_ctl
      --pgport {{ pg_auto_failover_node_port }}
      --hostname {{ ansible_fqdn }}
      --monitor postgres://autoctl_node@{{ pg_auto_failover_monitor_host }}:{{ pg_auto_failover_monitor_port }}/{{ pg_auto_failover_monitor_dbname }}?sslmode=require
      --auth trust
      --ssl-self-signed
      --dbname {{ pg_auto_failover_app_dbname }}
      --username {{ pg_auto_failover_app_user }}
      --run-as-keeper
  args:
    creates: "{{ pg_auto_failover_state_dir }}/postgres/postgresql.conf"
  when: not node_initialised.stat.exists

- name: Render systemd unit for pg_autoctl data node
  ansible.builtin.template:
    src: pg_autoctl-node.service.j2
    dest: /etc/systemd/system/pg_autoctl.service
    owner: root
    group: root
    mode: "0644"
  notify: Restart pg_autoctl

- name: Enable + start pg_autoctl data node service
  ansible.builtin.systemd:
    name: pg_autoctl
    state: started
    enabled: true
    daemon_reload: true

- name: Set formation sync replication policy (run from any data node, idempotent)
  become: true
  become_user: postgres
  ansible.builtin.command:
    cmd: >
      /usr/lib/postgresql/{{ postgres_version }}/bin/pg_autoctl set formation
      number-sync-standbys {{ pg_auto_failover_number_sync_standbys }}
      --pgdata {{ pg_auto_failover_state_dir }}/postgres
  changed_when: false
  failed_when: false
  # Only one node needs to push the policy — but the command is
  # idempotent on the monitor side, so running it from every node
  # keeps the role re-entrant without coordination.
  run_once: false
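
# Operator hint: the effective formation policy (number_sync_standbys and
# per-node replication quorum) can be reviewed afterwards with
# `pg_autoctl show settings --pgdata {{ pg_auto_failover_state_dir }}/postgres`.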