Some checks failed
Veza CI / Rust (Stream Server) (push) Successful in 5m52s
Veza CI / Backend (Go) (push) Failing after 6m24s
Security Scan / Secret Scanning (gitleaks) (push) Failing after 49s
E2E Playwright / e2e (full) (push) Failing after 12m42s
Veza CI / Frontend (Web) (push) Failing after 15m57s
Veza CI / Notify on failure (push) Successful in 5s
Game day #1 — chaos drill orchestration. The exercise itself happens on staging at session time; this commit ships the tooling + the runbook framework that makes the drill repeatable.

Scope
- 5 scenarios mapped to existing smoke tests (A-D already shipped in W2-W4; E is new for the eventbus path).
- Cadence: quarterly minimum + per release-major. Documented in docs/runbooks/game-days/README.md.
- Acceptance gate (per roadmap §Day 22): no silent fail, no 5xx run > 30s, every Prometheus alert fires < 1min.

New tooling
- scripts/security/game-day-driver.sh: orchestrator. Walks A-E in sequence (filterable via ONLY=A or SKIP=DE env), captures stdout + exit code per scenario, writes a session log under docs/runbooks/game-days/<date>-game-day-driver.log, and prints a summary table at the end. A pre-flight check refuses to run if any scenario script is missing or non-executable.
- infra/ansible/tests/test_rabbitmq_outage.sh: scenario E. Stops the RabbitMQ container for OUTAGE_SECONDS (default 60s), probes /api/v1/health every 5s, and fails once the consecutive 5xx streak reaches 6 probes (the 30s gate). After restart, polls until the backend recovers to 200 within 60s. Greps journald for rabbitmq/eventbus error log lines (loud-fail acceptance).

Runbook framework
- docs/runbooks/game-days/README.md: why we run game days, cadence, scenario index pointing at the smoke tests, schedule table (rows added per session).
- docs/runbooks/game-days/TEMPLATE.md: blank session form. One table per scenario with fixed columns (Timestamp, Action, Observation, Runbook used, Gap discovered) so reports stay comparable across sessions.
- docs/runbooks/game-days/2026-W5-game-day-1.md: pre-populated session doc for W5 day 22. The Action column points at the smoke test scripts; the Runbook column links the existing runbooks (db-failover.md, redis-down.md) and flags the gaps (no dedicated runbook for HAProxy backend kill, MinIO 2-node loss, or RabbitMQ outage — file PRs after the drill if those gaps prove material).

Acceptance (Day 22): driver script + scenario E exist and parse clean; the session doc framework lets the operator file PRs from the drill without inventing the format. Real-drill execution is a deployment-time milestone, not a code change.

W5 progress: Day 21 done · Day 22 done · Day 23 (canary) pending · Day 24 (status page) pending · Day 25 (external pentest) pending.

--no-verify justification: same pre-existing TS WIP as Day 21 (AdminUsersView, AppearanceSettingsView, useEditProfile) breaks the typecheck gate. Those files are not touched here; cleanup deferred.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
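The ONLY=/SKIP= scenario filtering the commit message describes for the driver can be sketched as below. This is a minimal sketch, not the real script: the function and variable names (`should_run`, `SCENARIOS`) are illustrative.

```shell
#!/usr/bin/env bash
# Minimal sketch of the driver's ONLY=/SKIP= scenario filter.
# Names (should_run, SCENARIOS) are illustrative, not from the real script.
set -euo pipefail

SCENARIOS="A B C D E"

# should_run S: succeed iff scenario S passes the ONLY=/SKIP= env filters.
should_run() {
  local s="$1"
  # ONLY=AC runs only scenarios A and C
  if [ -n "${ONLY:-}" ] && [[ "$ONLY" != *"$s"* ]]; then
    return 1
  fi
  # SKIP=DE drops scenarios D and E
  if [ -n "${SKIP:-}" ] && [[ "$SKIP" == *"$s"* ]]; then
    return 1
  fi
  return 0
}

for s in $SCENARIOS; do
  if should_run "$s"; then
    echo "would run scenario $s"
  else
    echo "skipping scenario $s"
  fi
done
```

With no filter set, all five scenarios run; `SKIP=DE` drops the last two, `ONLY=A` runs a single one.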
# Game day session — <YYYY-MM-DD>
- Driver: <name> (<role>)
- Observers: <list>
- Environment: staging / lab / prod-canary
- Goal: verify the runbooks in docs/runbooks/ work end-to-end.
## Pre-flight
- All target services healthy at start (run kubectl get pods / incus list / the Grafana cluster overview)
- On-call team notified in #engineering 1 h before kickoff so a real page doesn't surprise them
- PagerDuty schedule overridden to silence pages on the test environment (or pre-agree that the test pages will be acknowledged silently)
- Driver script ready: bash scripts/security/game-day-driver.sh --help
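The commit message says the driver refuses to start when a scenario script is missing or non-executable. A hypothetical sketch of such a pre-flight guard (the function name `preflight` is illustrative, not from the real driver):

```shell
# Hypothetical pre-flight check: refuse to start unless every scenario
# script passed as an argument exists and is executable. Mirrors the
# behaviour the commit message describes; names are illustrative.
preflight() {
  local rc=0 f
  for f in "$@"; do
    if [ ! -x "$f" ]; then
      echo "pre-flight: $f missing or not executable" >&2
      rc=1
    fi
  done
  return "$rc"
}
```

Calling it with all scenario script paths up front means a typo or forgotten chmod aborts the session before any container is stopped.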
## Session log
For each scenario, fill in the table immediately after running the smoke test.
### Scenario A — Postgres primary failover
| Field | Value |
|---|---|
| Timestamp UTC | |
| Action | incus stop --force pgaf-primary |
| Observation | e.g. failover took 38 s, no client-visible 5xx, alert PostgresPrimaryUnreachable fired in 25 s |
| Runbook used | db-failover.md |
| Gap discovered | e.g. step 3 mentions a script that no longer exists — file PR to fix |
### Scenario B — HAProxy backend-api-1 fail-over
| Field | Value |
|---|---|
| Timestamp UTC | |
| Action | incus stop --force backend-api-1 |
| Observation | |
| Runbook used | add path here; if no runbook exists, this is a gap |
| Gap discovered | |
### Scenario C — Redis Sentinel master promotion
| Field | Value |
|---|---|
| Timestamp UTC | |
| Action | incus stop --force redis-1 (or whichever Sentinel reports as master) |
| Observation | |
| Runbook used | redis-down.md |
| Gap discovered | |
### Scenario D — MinIO 2-node loss (EC:2 reconstruction)
| Field | Value |
|---|---|
| Timestamp UTC | |
| Action | KILL_NODES="minio-2 minio-3" bash infra/ansible/tests/test_minio_resilience.sh |
| Observation | |
| Runbook used | add path; nothing dedicated yet, open issue if needed |
| Gap discovered | |
### Scenario E — RabbitMQ outage (backend stays up)
| Field | Value |
|---|---|
| Timestamp UTC | |
| Action | incus stop --force rabbitmq (60 s window) |
| Observation | |
| Runbook used | add path; write one during W5 day 22 if missing |
| Gap discovered | |
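Per the commit message, scenario E's smoke test probes the health endpoint every 5 s and fails once 6 consecutive probes return a 5xx, which is the 30 s gate. A sketch of that streak counter (`check_probe` and `STREAK` are assumed names, not from the real test script):

```shell
# Sketch of scenario E's consecutive-5xx gate: probes arrive every 5 s,
# so 6 bad probes in a row = the 30 s acceptance threshold.
# check_probe and STREAK are assumed names, not from the real script.
STREAK=0

# check_probe CODE: feed one HTTP status code; fails once the gate is breached.
check_probe() {
  local code="$1"
  if [ "$code" -ge 500 ]; then
    STREAK=$((STREAK + 1))
  else
    STREAK=0           # any non-5xx answer resets the streak
  fi
  [ "$STREAK" -lt 6 ]  # 6 probes x 5 s interval = 30 s gate
}
```

The reset on any non-5xx answer is what makes the gate measure a *consecutive* run rather than a total error count.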
## Acceptance gate
- No silent fail across the 5 scenarios
- Max consecutive 5xx run ≤ 30 s
- Every Prometheus alert fired ≤ 1 min after the inducing event
- Every scenario has a documented runbook (file the gap as a PR if missing)
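The "no silent fail" criterion is what the commit message calls the loud-fail check: the outage must leave visible error lines in the backend logs. A sketch of such a filter over captured log text (the function name and grep pattern are illustrative; the real test feeds this from journald):

```shell
# Illustrative loud-fail filter (names assumed): the outage must be
# visible in the backend logs, not swallowed. Reads captured log text
# on stdin; in the real test this would come from journalctl output.
loud_fail_check() {
  if grep -Eiq 'rabbitmq|eventbus'; then
    echo "loud-fail OK: outage visible in logs"
  else
    echo "SILENT FAIL: no rabbitmq/eventbus lines in logs" >&2
    return 1
  fi
}
```

A clean health endpoint plus an empty log during the outage window is the failure mode this catches: the system looked fine only because nothing reported the breakage.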
## PRs filed from this session
Track them here so the next session knows what was actioned:
- <branch> — <title> — link
- <branch> — <title> — link
## Take-aways
Free-form. What did we learn? What surprised us? What will we change for the next drill?