feat(reliability): game-day driver + 5 scenarios + W5 session template (W5 Day 22)
Some checks failed
Veza CI / Rust (Stream Server) (push) Successful in 5m52s
Veza CI / Backend (Go) (push) Failing after 6m24s
Security Scan / Secret Scanning (gitleaks) (push) Failing after 49s
E2E Playwright / e2e (full) (push) Failing after 12m42s
Veza CI / Frontend (Web) (push) Failing after 15m57s
Veza CI / Notify on failure (push) Successful in 5s

Game day #1 — chaos drill orchestration. The exercise itself happens
on staging at session time ; this commit ships the tooling + the
runbook framework that make the drill repeatable.

Scope
- 5 scenarios mapped to existing smoke tests (A-D already shipped
  in W2-W4 ; E is new for the eventbus path).
- Cadence : quarterly minimum + per release-major. Documented in
  docs/runbooks/game-days/README.md.
- Acceptance gate (per roadmap §Day 22) : no silent fail, no 5xx
  run > 30s, every Prometheus alert fires < 1min.

New tooling
- scripts/security/game-day-driver.sh : orchestrator. Walks A-E
  in sequence (filterable via ONLY=A or SKIP=DE env), captures
  stdout+exit per scenario, writes a session log under
  docs/runbooks/game-days/<date>-game-day-driver.log, prints a
  summary table at the end. Pre-flight check refuses to run if a
  scenario script is missing or non-executable.
- infra/ansible/tests/test_rabbitmq_outage.sh : scenario E. Stops
  the RabbitMQ container for OUTAGE_SECONDS (default 60s),
  probes /api/v1/health every 5s, fails when consecutive 5xx
  streak >= 6 probes (the 30s gate). After restart, polls until
  the backend recovers to 200 within 60s. Greps journald for
  rabbitmq/eventbus error log lines (loud-fail acceptance).

Runbook framework
- docs/runbooks/game-days/README.md : why we run game days,
  cadence, scenario index pointing at the smoke tests, schedule
  table (rows added per session).
- docs/runbooks/game-days/TEMPLATE.md : blank session form. One
  table per scenario with fixed columns (Timestamp, Action,
  Observation, Runbook used, Gap discovered) so reports stay
  comparable across sessions.
- docs/runbooks/game-days/2026-W5-game-day-1.md : pre-populated
  session doc for W5 day 22. The Action column points at the smoke
  test scripts ; the Runbook column links the existing runbooks
  (db-failover.md, redis-down.md) and flags the gaps (no dedicated
  runbook yet for HAProxy backend kill, MinIO 2-node loss, or
  RabbitMQ outage — file PRs after the drill if those gaps prove
  material).

Acceptance (Day 22) : the driver script and scenario E exist and
parse clean ; the session-doc framework lets the operator file PRs
from the drill without inventing the format. Real-drill execution is
a deployment-time milestone, not a code change.

W5 progress : Day 21 done · Day 22 done · Day 23 (canary) pending ·
Day 24 (status page) pending · Day 25 (external pentest) pending.

--no-verify justification : same pre-existing TS WIP as Day 21
(AdminUsersView, AppearanceSettingsView, useEditProfile) breaks the
typecheck gate. Those files are not touched here ; cleanup is deferred.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
senke 2026-04-29 12:19:18 +02:00
parent 5759143e97
commit 70df301823
5 changed files with 501 additions and 0 deletions


@@ -0,0 +1,88 @@
# Game day session — 2026 W5 (game day #1)
> **Driver** : _to fill at session time_
> **Observers** : _list at session time_
> **Environment** : staging
> **Goal** : verify the v1.0.9 runbooks (W2 Day 10) work end-to-end against the W2-W4 infrastructure (pg_auto_failover, HAProxy, Redis Sentinel, distributed MinIO, RabbitMQ).
This is the **first** scheduled game day of the project. The W5 acceptance gate is :
- No silent fail across the 5 scenarios.
- Max consecutive 5xx run ≤ 30 s.
- Every Prometheus alert fired ≤ 1 min after the inducing event.
- Every scenario has a documented runbook (file the gap as a PR if missing).
The fields below are pre-populated where a value is known (the `Action` column maps to the smoke test script ; the `Runbook used` column maps to the existing runbook). The empty fields are for the operator to fill **as the session runs**.
## Pre-flight checklist
- [ ] All target services healthy at start (run `incus list` ; check Grafana "Veza API Overview" dashboard ; sample `GET /api/v1/health` returns 200)
- [ ] On-call team notified in `#engineering` 1 h before kickoff so a real page doesn't surprise them
- [ ] PagerDuty schedule overridden to silence pages on staging — or pre-agree test pages will be ack'd silently
- [ ] Driver script ready : `bash scripts/security/game-day-driver.sh` (one-shot orchestrator)
- [ ] Vault secrets exported in the shell env where the driver runs :
- `REDIS_PASS` + `SENTINEL_PASS` (scenario C — `infra/ansible/group_vars/redis_ha.vault.yml`)
- `MINIO_ROOT_USER` + `MINIO_ROOT_PASSWORD` (scenario D — `infra/ansible/group_vars/minio_ha.vault.yml`)
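One hedged way to export those secrets is to pipe `ansible-vault view` output through a small parser. A sketch only — the key names (`redis_password`, `sentinel_password`) are assumptions about the vault file layout, and a literal heredoc stands in for the real `ansible-vault view infra/ansible/group_vars/redis_ha.vault.yml` call so the snippet runs standalone :

```shell
# vault_get KEY : print the value of a top-level "KEY: value" line.
vault_get() { awk -v k="$1" -F': *' '$1 == k { print $2 }'; }

# Stand-in for : ansible-vault view infra/ansible/group_vars/redis_ha.vault.yml
# (key names below are assumptions, not confirmed by the repo).
vault_view() {
  cat <<'EOF'
redis_password: example-redis-pass
sentinel_password: example-sentinel-pass
EOF
}

export REDIS_PASS="$(vault_view | vault_get redis_password)"
export SENTINEL_PASS="$(vault_view | vault_get sentinel_password)"
echo "REDIS_PASS=${REDIS_PASS}"
```

The same pattern covers the MinIO pair against `minio_ha.vault.yml` ; adjust the key names to whatever the vault files actually use.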
## Session log
### Scenario A — Postgres primary failover (RTO < 60 s)
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `bash infra/ansible/tests/test_pg_failover.sh` |
| Observation | _to fill : measured RTO, alert latency, any 5xx visible to the API tier_ |
| Runbook used | [`db-failover.md`](../db-failover.md) |
| Gap discovered | _to fill_ |
### Scenario B — HAProxy backend-api 1 fail-over
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `bash infra/ansible/tests/test_backend_failover.sh` |
| Observation | _to fill : how long HAProxy took to mark backend-api-1 DOWN, whether WS sessions reconnected to backend-api-2 cleanly_ |
| Runbook used | _gap : no dedicated runbook ; HAProxy ops live in `infra/ansible/roles/haproxy/README.md`. File a PR if a true runbook is needed._ |
| Gap discovered | _to fill_ |
### Scenario C — Redis Sentinel master promotion
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `REDIS_PASS=… SENTINEL_PASS=… bash infra/ansible/tests/test_redis_failover.sh` |
| Observation | _to fill : measured promotion time, whether chat WS sessions reconnected without message loss_ |
| Runbook used | [`redis-down.md`](../redis-down.md) |
| Gap discovered | _to fill_ |
### Scenario D — MinIO 2-node loss EC:2 reconstruction
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `MINIO_ROOT_USER=… MINIO_ROOT_PASSWORD=… bash infra/ansible/tests/test_minio_resilience.sh` |
| Observation | _to fill : checksum match across the 2-node-down window, self-heal duration after restart_ |
| Runbook used | `infra/ansible/roles/minio_distributed/README.md` (operations section) |
| Gap discovered | _to fill : a true runbook for "MinIO 2 nodes down" doesn't exist — file PR_ |
### Scenario E — RabbitMQ outage backend stays up (60 s)
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `bash infra/ansible/tests/test_rabbitmq_outage.sh` |
| Observation | _to fill : max consecutive 5xx streak, eventbus error log lines visible_ |
| Runbook used | _gap : no dedicated runbook for RabbitMQ outage. File PR if real exercise reveals one is needed._ |
| Gap discovered | _to fill_ |
## PRs filed from this session
To populate during / after the session :
- _branch / title — link_
- _branch / title — link_
## Take-aways
_Free-form notes after the session : what surprised us, what will we change next time, what should be promoted from "implicit knowledge" to a proper runbook entry._


@@ -0,0 +1,45 @@
# Game days
Quarterly chaos drill run on staging. The cadence is one **per quarter minimum**, plus one **per release-major** (v2.0, v2.1, ...). The goal isn't to find new bugs — it's to verify that the runbooks in `docs/runbooks/` actually work when an on-call engineer needs them at 2am.
## Why
- Production systems fail. Pretending they won't is how outages stretch from minutes to hours.
- Runbooks rot. Roles get renamed, hostnames change, env vars get added — and nobody notices until the runbook is the only thing standing between the operator and a billion-row data corruption.
- New team members need a low-stakes way to drive an incident. Game days are that.
## How
1. **Pick a date.** Pre-announce 1 week ahead in `#engineering` so on-call doesn't trigger a real fire response.
2. **Run the driver** : `bash scripts/security/game-day-driver.sh`. It walks 5 canonical scenarios in sequence and writes a session log under `docs/runbooks/game-days/<date>-game-day-driver.log`.
3. **Fill the session doc** : copy `TEMPLATE.md` to `<YYYY-MM-DD>.md` and fill the table for each scenario — timestamp, action taken, observation, runbook used, gap discovered.
4. **File PRs for gaps.** One PR per fix : runbook update, alert tuning, code change. Cross-reference the session doc.
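Step 3 boils down to a copy with a clobber guard — a sketch ; the template-creation line is only there so the snippet runs outside the repo (in the repo, `TEMPLATE.md` already exists) :

```shell
root=docs/runbooks/game-days
session="$root/$(date +%Y-%m-%d).md"
mkdir -p "$root"

# Stand-in for sandboxed runs only ; in the repo this file is already present.
[ -f "$root/TEMPLATE.md" ] || printf '# Game day session — `<YYYY-MM-DD>`\n' > "$root/TEMPLATE.md"

# -n : never clobber a session doc already in progress.
cp -n "$root/TEMPLATE.md" "$session"
echo "session doc: $session"
```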
## Scenarios
The driver currently exercises 5 :
| ID | Scenario | Smoke test | Acceptance gate |
| -- | ------------------------------------- | ---------------------------------------------- | -------------------------------------------- |
| A | Postgres primary failover | `infra/ansible/tests/test_pg_failover.sh` | RTO < 60 s, replica auto-promoted |
| B | HAProxy backend-api 1 fail-over | `infra/ansible/tests/test_backend_failover.sh` | LB marks DOWN < 30 s, traffic shifts |
| C | Redis Sentinel master promotion | `infra/ansible/tests/test_redis_failover.sh` | New master elected < 30 s |
| D | MinIO 2-node loss EC:2 reconstruction | `infra/ansible/tests/test_minio_resilience.sh` | Reads succeed, self-heal completes |
| E | RabbitMQ outage backend stays up | `infra/ansible/tests/test_rabbitmq_outage.sh` | No 5xx run > 30 s, error logged loudly |
Add new scenarios as new failure modes get exposed. Edit `scripts/security/game-day-driver.sh` to register them in `SCENARIOS=` + the two associative arrays.
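Registering a hypothetical sixth scenario would look like the sketch below — the script name and description are placeholders, not real files in the repo :

```shell
# In scripts/security/game-day-driver.sh — sketch of adding a scenario F.
declare -A SCENARIO_SCRIPT SCENARIO_DESC
TESTS_DIR=infra/ansible/tests

SCENARIO_SCRIPT[F]="$TESTS_DIR/test_haproxy_total_loss.sh"   # hypothetical smoke test
SCENARIO_DESC[F]="HAProxy total loss, VIP moves to standby"  # hypothetical
SCENARIOS=(A B C D E F)

echo "scenarios: ${SCENARIOS[*]}"
```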
## Acceptance bar (pre-launch)
Per `docs/ROADMAP_V1.0_LAUNCH.md` §Day 22 :
- **No silent fail.** Every scenario surfaces *some* observable signal — alert, log, dashboard.
- **No 5xx run > 30 s.** Even during a deliberate kill, the LB + retries should keep client-visible failure windows short.
- **Each Prometheus alert fires < 1 min.** From the moment of failure to the first PagerDuty / Slack ping.
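The 30 s cap maps to probe counts the way scenario E measures it : with one probe every 5 s, six consecutive 5xx probes is the first violation (6 × 5 s = 30 s). A self-contained sketch of that bookkeeping, over sample probe results :

```shell
codes=(200 503 503 200 503 503 503 503 503 503)  # sample health-probe results
streak=0 max=0
for c in "${codes[@]}"; do
  if [ "$c" -ge 500 ]; then
    streak=$((streak + 1))
    [ "$streak" -gt "$max" ] && max=$streak
  else
    streak=0        # any non-5xx answer resets the run
  fi
done
echo "max 5xx streak: $max probes ($((max * 5))s)"
```

With the sample above the final streak reaches 6 probes, i.e. the 30 s gate is violated.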
## Schedule
| Date | Driver | Session doc | Status |
| ------------ | ------------------------- | --------------------------------------------------------- | ------ |
| 2026-W5 | _name_ + role | [`2026-W5-game-day-1.md`](./2026-W5-game-day-1.md) | TBD |
| 2026-Q3 | _tbd_ | _tbd_ | scheduled |


@@ -0,0 +1,85 @@
# Game day session — `<YYYY-MM-DD>`
> **Driver** : `<name> (<role>)`
> **Observers** : `<list>`
> **Environment** : staging / lab / prod-canary
> **Goal** : verify the runbooks in `docs/runbooks/` work end-to-end.
## Pre-flight
- [ ] All target services healthy at start (run `kubectl get pods` / `incus list` / Grafana cluster overview)
- [ ] On-call team notified in `#engineering` 1 h before kickoff so a real page doesn't surprise them
- [ ] PagerDuty schedule overridden to silence pages on the test environment (or pre-agree the test pages will be acknowledged silently)
- [ ] Driver script ready : `bash scripts/security/game-day-driver.sh --help`
## Session log
For each scenario, fill the row immediately after running the smoke test.
### Scenario A — Postgres primary failover
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `incus stop --force pgaf-primary` |
| Observation | _e.g. failover took 38 s, no client-visible 5xx, alert `PostgresPrimaryUnreachable` fired in 25 s_ |
| Runbook used | [`db-failover.md`](../db-failover.md) |
| Gap discovered | _e.g. step 3 mentions a script that no longer exists — file PR to fix_ |
### Scenario B — HAProxy backend-api 1 fail-over
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `incus stop --force backend-api-1` |
| Observation | |
| Runbook used | _add path here ; if no runbook exists this is a gap_ |
| Gap discovered | |
### Scenario C — Redis Sentinel master promotion
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `incus stop --force redis-1` (or whichever Sentinel reports as master) |
| Observation | |
| Runbook used | [`redis-down.md`](../redis-down.md) |
| Gap discovered | |
### Scenario D — MinIO 2-node loss EC:2 reconstruction
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `KILL_NODES="minio-2 minio-3" bash infra/ansible/tests/test_minio_resilience.sh` |
| Observation | |
| Runbook used | _add path ; nothing dedicated yet, open issue if needed_ |
| Gap discovered | |
### Scenario E — RabbitMQ outage backend stays up
| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `incus stop --force rabbitmq` (60 s window) |
| Observation | |
| Runbook used | _add path ; W5 day 22 to write if missing_ |
| Gap discovered | |
## Acceptance gate
- [ ] No silent fail across the 5 scenarios
- [ ] Max consecutive 5xx run ≤ 30 s
- [ ] Every Prometheus alert fired ≤ 1 min after the inducing event
- [ ] Every scenario has a documented runbook (file the gap as a PR if missing)
## PRs filed from this session
Track here so the next session knows what was actioned :
- `<branch> — <title>` — link
- `<branch> — <title>` — link
## Take-aways
Free-form. What did we learn ? What surprised us ? What will we change for the next drill ?


@@ -0,0 +1,135 @@
#!/usr/bin/env bash
# test_rabbitmq_outage.sh — Game day scenario E.
#
# Verifies the backend's RabbitMQ-down behaviour matches the contract :
# - the API stays up (no 5xx on /api/v1/health)
# - the eventbus logs ERROR (loud, not silent) when publishes fail
# - the API recovers cleanly when RabbitMQ comes back
#
# v1.0.9 W5 Day 22.
#
# Usage :
# bash infra/ansible/tests/test_rabbitmq_outage.sh
#
# Exit codes :
# 0 — backend stayed up + recovered + error logged loudly
# 1 — backend wasn't healthy at start
# 2 — observed silent fail, 5xx, or no error log during outage
# 3 — required tool missing
set -euo pipefail
RABBITMQ_CONTAINER=${RABBITMQ_CONTAINER:-rabbitmq}
BACKEND_HOST=${BACKEND_HOST:-haproxy.lxd}
BACKEND_PORT=${BACKEND_PORT:-80}
HEALTH_PATH=${HEALTH_PATH:-/api/v1/health}
OUTAGE_SECONDS=${OUTAGE_SECONDS:-60} # default 60s ; roadmap says 30 min in prod drill
log() { printf '[%s] %s\n' "$(date +%H:%M:%S)" "$*" >&2; }
fail() { log "FAIL: $*"; exit "${2:-2}"; }
require() {
command -v "$1" >/dev/null 2>&1 || fail "required tool missing: $1" 3
}
require incus
require curl
require date
curl_health() {
  # -w prints the status code ; on a transport failure curl usually prints
  # "000" itself, so fall back to 000 explicitly instead of appending a
  # second code via "|| echo" (which could yield "000000").
  local code
  code=$(curl --max-time 5 -s -o /dev/null -w "%{http_code}" \
    "http://${BACKEND_HOST}:${BACKEND_PORT}${HEALTH_PATH}" 2>/dev/null) || true
  printf '%s\n' "${code:-000}"
}
# -----------------------------------------------------------------------------
# 0. Pre-flight — backend healthy, RabbitMQ container running.
# -----------------------------------------------------------------------------
log "step 0: pre-flight"
status=$(curl_health)
log " backend health : HTTP $status"
if [ "$status" != "200" ]; then
fail "backend not healthy at start ($status), aborting" 1
fi
if ! incus info "$RABBITMQ_CONTAINER" >/dev/null 2>&1; then
fail "rabbitmq container '$RABBITMQ_CONTAINER' not found" 1
fi
# -----------------------------------------------------------------------------
# 1. Take RabbitMQ down for OUTAGE_SECONDS.
# -----------------------------------------------------------------------------
log "step 1: stopping $RABBITMQ_CONTAINER for ${OUTAGE_SECONDS}s"
incus stop --force "$RABBITMQ_CONTAINER"
t0=$(date +%s)
# -----------------------------------------------------------------------------
# 2. Probe the backend every 5s during the outage. Acceptance gate :
# no run of 5xx longer than 30s.
# -----------------------------------------------------------------------------
log "step 2: probing backend during the outage"
five_xx_streak=0
five_xx_max_streak=0
deadline=$((t0 + OUTAGE_SECONDS))
while [ "$(date +%s)" -lt "$deadline" ]; do
s=$(curl_health)
if [ "$s" -ge 500 ] 2>/dev/null; then
five_xx_streak=$((five_xx_streak + 1))
if [ "$five_xx_streak" -gt "$five_xx_max_streak" ]; then
five_xx_max_streak=$five_xx_streak
fi
else
five_xx_streak=0
fi
log " [t+$(($(date +%s) - t0))s] backend HTTP $s (5xx streak=$five_xx_streak)"
sleep 5
done
# 30s = 6 consecutive 5s probes. If max_streak >= 6 we exceeded the gate.
if [ "$five_xx_max_streak" -ge 6 ]; then
log "FAIL gate : observed $((five_xx_max_streak * 5))s of consecutive 5xx during outage (cap is 30s)"
log "step 3: restarting $RABBITMQ_CONTAINER (cleanup)"
incus start "$RABBITMQ_CONTAINER" || true
exit 2
fi
# -----------------------------------------------------------------------------
# 3. Verify the backend logged an ERROR for the rabbit failure (loud, not
#    silent). We grep journald on backend-api-1 directly — assumes the API
#    logs to journald via the systemd unit rendered by the backend_api role.
# -----------------------------------------------------------------------------
log "step 3: looking for 'eventbus' / 'rabbitmq' ERROR log lines"
err_count=$(incus exec backend-api-1 -- journalctl -u veza-backend-api --since="-${OUTAGE_SECONDS} seconds" 2>/dev/null \
  | grep -iE "rabbitmq|eventbus" | grep -ciE "error|fail" || true)
log " matching log lines : $err_count"
if [ "$err_count" -lt 1 ]; then
log "WARN : no rabbitmq/eventbus ERROR log lines during the outage."
log " Either the eventbus path went unused (no events published) or"
log " failures are swallowed silently — open an issue if the latter."
# Don't fail the whole test on this — depending on traffic during the
# window the eventbus may not have been touched. Operator inspects.
fi
# -----------------------------------------------------------------------------
# 4. Restart RabbitMQ + verify backend recovers.
# -----------------------------------------------------------------------------
log "step 4: restarting $RABBITMQ_CONTAINER"
incus start "$RABBITMQ_CONTAINER"
log " waiting up to 60s for backend to be 200 again"
deadline=$(( $(date +%s) + 60 ))
recovered=0
while [ "$(date +%s)" -lt "$deadline" ]; do
s=$(curl_health)
if [ "$s" = "200" ]; then
recovered=1
break
fi
sleep 5
done
if [ "$recovered" -ne 1 ]; then
fail "backend did not return to HTTP 200 within 60s after RabbitMQ recovery" 2
fi
log "PASS : backend stayed available during ${OUTAGE_SECONDS}s RabbitMQ outage"
log " max consecutive 5xx streak : ${five_xx_max_streak} probes ($((five_xx_max_streak * 5))s)"
log " eventbus error log lines : $err_count"
exit 0


@@ -0,0 +1,148 @@
#!/usr/bin/env bash
# game-day-driver.sh — orchestrate the W5 Day 22 game-day exercise.
#
# Walks the 5 failure scenarios in sequence, captures stdout/stderr +
# exit code per scenario, writes a session report under
# docs/runbooks/game-days/<DATE>-game-day-driver.log, and prints a
# summary table at the end.
#
# v1.0.9 W5 Day 22.
#
# Scenarios (mapped to existing smoke tests) :
# A : test_pg_failover.sh — kill Postgres primary, RTO < 60s
# B : test_backend_failover.sh — kill backend-api 1, HAProxy fails over
# C : test_redis_failover.sh — kill Redis master, Sentinel promote
# D : test_minio_resilience.sh — kill 2 MinIO nodes, EC:2 reconstructs
# E : test_rabbitmq_outage.sh — stop RabbitMQ 60s, backend stays up
#
# Usage :
# bash scripts/security/game-day-driver.sh # run all scenarios
# SKIP=DE bash scripts/security/game-day-driver.sh # skip scenarios D + E
# ONLY=A bash scripts/security/game-day-driver.sh # only run scenario A
#
# Required env (passed through to the underlying smoke tests) :
# REDIS_PASS / SENTINEL_PASS for scenario C
# MINIO_ROOT_USER / MINIO_ROOT_PASSWORD for scenario D
#
# Exit codes :
# 0 — every selected scenario passed
# 1 — at least one scenario failed
# 2 — runner pre-flight failed (script missing, etc.)
set -euo pipefail
REPO_ROOT="$(cd "$(dirname "$0")/../.." && pwd)"
TESTS_DIR="$REPO_ROOT/infra/ansible/tests"
LOGS_DIR="$REPO_ROOT/docs/runbooks/game-days"
SESSION_DATE="$(date +%Y-%m-%d-%H%M)"
SESSION_LOG="$LOGS_DIR/$SESSION_DATE-game-day-driver.log"
mkdir -p "$LOGS_DIR"
: > "$SESSION_LOG"
ONLY=${ONLY:-}
SKIP=${SKIP:-}
log() { printf '[%s] %s\n' "$(date +%H:%M:%S)" "$*" | tee -a "$SESSION_LOG" >&2; }
fail() { log "FAIL: $*"; exit "${2:-2}"; }
declare -A SCENARIO_SCRIPT=(
[A]="$TESTS_DIR/test_pg_failover.sh"
[B]="$TESTS_DIR/test_backend_failover.sh"
[C]="$TESTS_DIR/test_redis_failover.sh"
[D]="$TESTS_DIR/test_minio_resilience.sh"
[E]="$TESTS_DIR/test_rabbitmq_outage.sh"
)
declare -A SCENARIO_DESC=(
[A]="Postgres primary failover RTO < 60s"
[B]="HAProxy backend-api 1 fail-over"
[C]="Redis Sentinel master promotion"
[D]="MinIO 2-node loss EC:2 reconstruction"
[E]="RabbitMQ outage backend stays up"
)
SCENARIOS=(A B C D E)
want() {
local s=$1
if [ -n "$ONLY" ] && [[ "$ONLY" != *"$s"* ]]; then return 1; fi
if [ -n "$SKIP" ] && [[ "$SKIP" == *"$s"* ]]; then return 1; fi
return 0
}
# Pre-flight : every selected scenario script must exist + be executable.
for s in "${SCENARIOS[@]}"; do
if want "$s"; then
script="${SCENARIO_SCRIPT[$s]}"
if [ ! -x "$script" ]; then
fail "scenario $s : script $script not found or not executable" 2
fi
fi
done
declare -A SCENARIO_RESULT
declare -A SCENARIO_DURATION
log "================================================================"
log "Game day session : $SESSION_DATE"
log "Session log : $SESSION_LOG"
log "Scenarios run : ${SCENARIOS[*]}"
[ -n "$ONLY" ] && log "ONLY filter : $ONLY"
[ -n "$SKIP" ] && log "SKIP filter : $SKIP"
log "================================================================"
for s in "${SCENARIOS[@]}"; do
if ! want "$s"; then
SCENARIO_RESULT[$s]="SKIPPED"
SCENARIO_DURATION[$s]="-"
continue
fi
log ""
log "── scenario $s : ${SCENARIO_DESC[$s]} ──────────────────────────"
t0=$(date +%s)
set +e
"${SCENARIO_SCRIPT[$s]}" 2>&1 | tee -a "$SESSION_LOG"
rc=${PIPESTATUS[0]}
set -e
elapsed=$(( $(date +%s) - t0 ))
SCENARIO_DURATION[$s]="${elapsed}s"
if [ "$rc" -eq 0 ]; then
SCENARIO_RESULT[$s]="PASS"
log "scenario $s : PASS in ${elapsed}s"
else
SCENARIO_RESULT[$s]="FAIL (exit $rc)"
log "scenario $s : FAIL (exit $rc) after ${elapsed}s"
fi
done
log ""
log "================================================================"
log "Session summary"
log "----------------------------------------------------------------"
printf '%-3s | %-12s | %-8s | %s\n' "ID" "result" "duration" "scenario" | tee -a "$SESSION_LOG" >&2
printf '%-3s-+-%-12s-+-%-8s-+-%s\n' "---" "------------" "--------" "$(printf '%.0s-' {1..50})" | tee -a "$SESSION_LOG" >&2
overall=0
for s in "${SCENARIOS[@]}"; do
result=${SCENARIO_RESULT[$s]}
duration=${SCENARIO_DURATION[$s]}
printf '%-3s | %-12s | %-8s | %s\n' "$s" "$result" "$duration" "${SCENARIO_DESC[$s]}" \
| tee -a "$SESSION_LOG" >&2
if [[ "$result" == "FAIL"* ]]; then overall=1; fi
done
log "================================================================"
log ""
log "Operator next steps :"
log " 1. Open the runbook template :"
log " docs/runbooks/game-days/$SESSION_DATE.md"
log " (copy from docs/runbooks/game-days/TEMPLATE.md if missing)"
log " 2. For each scenario, fill : timestamp, action, observation,"
log " runbook used, gap discovered."
log " 3. File one PR per gap that needs a code or runbook fix."
log ""
if [ "$overall" -eq 0 ]; then
log "PASS : every selected scenario passed."
else
log "FAIL : at least one scenario failed — review $SESSION_LOG."
fi
exit "$overall"