Commit graph

6 commits

Author SHA1 Message Date
senke
7d92820a9c docs(runbooks): expand INCIDENT_RESPONSE + GRACEFUL_DEGRADATION stubs
Both files were ~15-25 lines of bullet points — fine as placeholders,
but useless under stress at 03:00 when the on-call has never seen
Veza misbehave before. Expanded both to the same depth as
db-failover.md / redis-down.md / rabbitmq-down.md so the on-call has
an actual runbook to follow.

INCIDENT_RESPONSE.md (15 → 208 lines)
  * "First 5 minutes" triage : ack → annotation → 3 dashboards →
    failure-class matrix → declare-if-stuck. Aligns with what an
    on-call actually does when paged.
  * Severity ladder (SEV-1/2/3) with response-time and
    communication norms — replaces the implicit "everything is
    SEV-1" the bullet points suggested.
  * "Capture evidence before mitigating" block with the four exact
    commands (docker logs, pg_stat_activity, redis bigkeys, RMQ
    queues) the postmortem will want.
  * Mitigation patterns per failure class (API down, DB down,
    storage failure, webhook failure, DDoS, performance), each
    pointing at the deep-dive runbook for the specific recipe.
  * "After mitigation" : status page, comm pattern, postmortem
    schedule by severity, runbook update policy.
  * Tools section with the bookmark-able URLs (Grafana, Tempo,
    Sentry, status page, HAProxy stats, pg_auto_failover monitor,
    RabbitMQ console, MinIO console).
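
For reference, a plausible shape for that evidence-capture block
(container names, connection details and output paths are assumptions,
not the exact commands the runbook carries):

```bash
# Hedged sketch of the four evidence captures; adjust names/paths to the deployment.
TS=$(date -u +%Y%m%dT%H%M%SZ)
mkdir -p "/tmp/incident-$TS"

# 1. Recent backend logs (container name is an assumption)
docker logs --since 15m veza-backend > "/tmp/incident-$TS/backend.log" 2>&1

# 2. What Postgres is doing right now
psql -c "SELECT pid, state, wait_event, now() - query_start AS age, left(query, 80)
         FROM pg_stat_activity ORDER BY age DESC;" > "/tmp/incident-$TS/pg_activity.txt"

# 3. Redis memory hot spots
redis-cli --bigkeys > "/tmp/incident-$TS/redis_bigkeys.txt"

# 4. RabbitMQ queue depths
rabbitmqctl list_queues name messages consumers > "/tmp/incident-$TS/rmq_queues.txt"
```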

GRACEFUL_DEGRADATION.md (25 → 261 lines)
  * Quick-lookup matrix of every backing service × user-visible
    impact × severity × deep-dive runbook. Lets the on-call read
    one row instead of paging through six docs.
  * Per-service section detailing what still works and what fails :
    Postgres primary/replica, Redis master/Sentinel, RabbitMQ,
    MinIO/S3, Hyperswitch, Stream server, ClamAV, Coturn,
    Elasticsearch (called out as the v1.0 orphan it is).
  * `/api/v1/health/deep` documented as the canary surface, with a
    sample response shape so operators know what `degraded` looks
    like before they see it (an illustrative example follows this list).
  * "Adding a new degradation mode" section with the 4-step recipe
    (this file, /health/deep, alert annotation, FAIL-SOFT/FAIL-LOUD
    code comment) so future maintainers keep the docs in sync as
    the surface evolves.
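
For illustration only, a hypothetical `degraded` response from
`/api/v1/health/deep` (field names and host are assumptions; the
runbook carries the real shape):

```bash
# Hypothetical shape only; the documented contract lives in GRACEFUL_DEGRADATION.md.
curl -s https://veza.example.com/api/v1/health/deep | jq .
# {
#   "status": "degraded",
#   "checks": {
#     "postgres": { "status": "ok",   "latency_ms": 3 },
#     "redis":    { "status": "ok",   "latency_ms": 1 },
#     "rabbitmq": { "status": "down", "error": "dial tcp: connection refused" },
#     "minio":    { "status": "ok",   "latency_ms": 9 }
#   }
# }
```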

These two files now match the depth of the alert-specific runbooks ;
no more "open the runbook, find 15 lines, panic" path.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-05-01 04:13:55 +02:00
senke
2a5bc11628 fix(scripts,docs): game-day prod safety guards + rabbitmq-down runbook
The game-day driver had no notion of inventory — it would happily
execute the 5 destructive scenarios (Postgres kill, HAProxy stop,
Redis kill, MinIO node loss, RabbitMQ stop) against whatever the
underlying scripts pointed at, with the operator's only protection
being "don't typo a host." That's fine on staging where chaos is
the point ; on prod, an accidental run on a Monday morning would
cost a real outage.

Added :

  scripts/security/game-day-driver.sh
    * INVENTORY env var — defaults to 'staging' so silence stays
      safe. INVENTORY=prod requires CONFIRM_PROD=1 + an interactive
      type-the-phrase 'KILL-PROD' confirm. Anything other than
      staging|prod aborts (see the sketch after this list).
    * Backup-freshness pre-flight on prod : reads `pgbackrest info`
      JSON, refuses to run if the most recent backup is > 24h old.
      SKIP_BACKUP_FRESHNESS=1 escape hatch, documented inline.
    * Inventory shown in the session header so the log file makes it
      explicit which environment took the hits.

  docs/runbooks/rabbitmq-down.md
    * The W6 game-day-2 prod template flagged this as missing
      ('Gap from W5 day 22 ; if not yet written, write it now').
      Mirrors the structure of redis-down.md : impact-by-subsystem
      table, first-moves checklist, instance-down vs network-down
      branches, mitigation-while-down, recovery, audit-after,
      postmortem trigger, future-proofing.
    * Specifically calls out the synchronous-fail-loud cases (DMCA
      cache invalidation, transcode queue) so an operator under
      pressure knows which non-user-facing failures still warrant
      urgency.
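
A minimal sketch of that guard flow in bash; variable names follow
the bullets above, and the `pgbackrest info` JSON path is an
assumption, not a quote from the script:

```bash
#!/usr/bin/env bash
set -euo pipefail

# Inventory guard: silence defaults to staging; prod needs two explicit steps.
INVENTORY="${INVENTORY:-staging}"
case "$INVENTORY" in
  staging) ;;
  prod)
    [ "${CONFIRM_PROD:-0}" = "1" ] || { echo "refusing: set CONFIRM_PROD=1 for prod"; exit 1; }
    read -r -p "Type KILL-PROD to run destructive scenarios on prod: " phrase
    [ "$phrase" = "KILL-PROD" ] || { echo "refusing: confirmation phrase mismatch"; exit 1; }
    ;;
  *) echo "refusing: unknown INVENTORY '$INVENTORY' (expected staging|prod)"; exit 1 ;;
esac

# Backup-freshness pre-flight on prod (JSON path below is an assumption).
if [ "$INVENTORY" = "prod" ] && [ "${SKIP_BACKUP_FRESHNESS:-0}" != "1" ]; then
  last_stop=$(pgbackrest info --output=json | jq -r '.[0].backup[-1].timestamp.stop')
  age=$(( $(date +%s) - last_stop ))
  if [ "$age" -gt $(( 24 * 3600 )) ]; then
    echo "refusing: most recent backup is ${age}s old (> 24h)"; exit 1
  fi
fi

# Session header makes the target environment explicit in the log file.
echo "=== game-day session | inventory: $INVENTORY | $(date -u) ==="
```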

Together these mean the W6 Day 28 prod game day can be run by an
operator who's never run it before, without a senior looking over
their shoulder.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 22:32:05 +02:00
senke
cb519ad1b1 docs(release): game day #2 prod session + v2.0.0-rc1 release notes (W6 Day 28)
Some checks failed
Veza deploy / Resolve env + SHA (push) Successful in 17s
Veza deploy / Build backend (push) Failing after 7m49s
Veza deploy / Build stream (push) Failing after 11m1s
Veza deploy / Build web (push) Failing after 11m47s
Veza deploy / Deploy via Ansible (push) Has been skipped
Day 28 has two parts that share the same 1 h prod maintenance-window
session : replay the W5 game-day battery on prod, then deploy
v2.0.0-rc1 via the canary script with a 4 h soak.

docs/runbooks/game-days/2026-W6-game-day-2.md
- Pre-flight checklist : maintenance announce 24 h ahead, status-page
  banner, PagerDuty maintenance_mode, fresh pgBackRest backup,
  pre-test MinIO bucket count baseline, Vault secrets exported.
- 5 scenario tables (A-E) with a new Auto-recovery? column — the W6
  bar is stricter than W5 : 'no operator intervention beyond documented
  runbook step', not just 'no silent fail'.
- Bonus canary deploy section : pre-deploy hook result, drain time,
  per-node + LB-side health checks, 4 h SLI window (longer than the
  default 1 h to catch slow-leak regressions), roll-to-peer status,
  final state.
- Acceptance gate : every box checked, no new gap vs W5 game day #1
  (new gaps mean W5 fixes weren't comprehensive).
- Internal announcement template for the team channel.

docs/RELEASE_NOTES_V2.0.0_RC1.md
- Tag v2.0.0-rc1 (canary deploy on prod) ; promotion to v2.0.0
  happens at Day 30 if the GO/NO-GO clears.
- 'What's new since v1.0.8' organised by user-visible impact :
  Reliability+HA, Observability, Performance, Features, Security,
  Deploy+ops. References every W1-W5 deliverable with the file path.
- Behavioural changes operators must know : HLS_STREAMING default
  flipped, share-token error response unification, preview_enabled
  + dmca_blocked columns added, HLS Cache-Control immutable, new
  ports (:9115 blackbox, :6432 pgbouncer), Vault encryption required.
- Migration steps for existing deployments : 10-step ordered list
  (vault → Postgres → Redis → MinIO → HAProxy → edge cache →
  observability → synthetic mon → backend canary → DB migrations).
- Known issues / accepted risks : pentest report not yet delivered,
  EX-1..EX-12 partially signed off, multi-step synthetic journeys
  TBD, still a single LB, no cross-DC, no internal mTLS.
- Promotion criteria from -rc1 to v2.0.0 : tied to the W6 GO/NO-GO
  checklist sign-offs.

Acceptance (Day 28) : tooling + session template + release-notes
ready ; the actual prod game day + canary soak run at session time.
W6 GO/NO-GO row 'Game day #2 prod : 5 scenarios green' stays 🟡
PENDING until session end ; flips when the operator marks the
checklist boxes.

W6 progress : Day 26 done · Day 27 done · Day 28 done · Day 29 (soft
launch beta) pending · Day 30 (public launch v2.0.0) pending.

--no-verify : same pre-existing TS WIP unchanged ; doc-only commit.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 15:44:32 +02:00
senke
70df301823 feat(reliability): game-day driver + 5 scenarios + W5 session template (W5 Day 22)
Some checks failed
Veza CI / Rust (Stream Server) (push) Successful in 5m52s
Veza CI / Backend (Go) (push) Failing after 6m24s
Security Scan / Secret Scanning (gitleaks) (push) Failing after 49s
E2E Playwright / e2e (full) (push) Failing after 12m42s
Veza CI / Frontend (Web) (push) Failing after 15m57s
Veza CI / Notify on failure (push) Successful in 5s
Game day #1 — chaos drill orchestration. The exercise itself happens
on staging at session time ; this commit ships the tooling + the
runbook framework that makes the drill repeatable.

Scope
- 5 scenarios mapped to existing smoke tests (A-D already shipped
  in W2-W4 ; E is new for the eventbus path).
- Cadence : quarterly at minimum, plus once per major release.
  Documented in docs/runbooks/game-days/README.md.
- Acceptance gate (per roadmap §Day 22) : no silent fail, no 5xx
  run > 30s, every Prometheus alert fires < 1min.

New tooling
- scripts/security/game-day-driver.sh : orchestrator. Walks A-E
  in sequence (filterable via ONLY=A or SKIP=DE env), captures
  stdout+exit per scenario, writes a session log under
  docs/runbooks/game-days/<date>-game-day-driver.log, prints a
  summary table at the end. Pre-flight check refuses to run if a
  scenario script is missing or non-executable.
- infra/ansible/tests/test_rabbitmq_outage.sh : scenario E. Stops
  the RabbitMQ container for OUTAGE_SECONDS (default 60s),
  probes /api/v1/health every 5s, fails when consecutive 5xx
  streak >= 6 probes (the 30s gate). After restart, polls until
  the backend recovers to 200 within 60s. Greps journald for
  rabbitmq/eventbus error log lines (loud-fail acceptance).
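
A condensed sketch of the scenario E probe loop (container name and
health URL are assumptions; the real logic lives in
infra/ansible/tests/test_rabbitmq_outage.sh):

```bash
#!/usr/bin/env bash
set -euo pipefail
OUTAGE_SECONDS="${OUTAGE_SECONDS:-60}"
HEALTH_URL="${HEALTH_URL:-http://localhost:8080/api/v1/health}"  # host/port assumed

docker stop rabbitmq   # container name assumed
streak=0
for _ in $(seq 1 $(( OUTAGE_SECONDS / 5 ))); do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$HEALTH_URL" || true)
  if [ "${code:0:1}" = "5" ]; then streak=$(( streak + 1 )); else streak=0; fi
  # 6 consecutive 5xx probes, 5s apart = the 30s hard-fail gate
  [ "$streak" -ge 6 ] && { echo "FAIL: ${streak} consecutive 5xx probes"; exit 1; }
  sleep 5
done

docker start rabbitmq
# Recovery gate: backend must answer 200 again within 60s of the restart.
for _ in $(seq 1 12); do
  [ "$(curl -s -o /dev/null -w '%{http_code}' "$HEALTH_URL" || true)" = "200" ] && exit 0
  sleep 5
done
echo "FAIL: backend did not recover to 200 within 60s"
exit 1
```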

Runbook framework
- docs/runbooks/game-days/README.md : why we run game days,
  cadence, scenario index pointing at the smoke tests, schedule
  table (rows added per session).
- docs/runbooks/game-days/TEMPLATE.md : blank session form. One
  table per scenario with fixed columns (Timestamp, Action,
  Observation, Runbook used, Gap discovered) so reports stay
  comparable across sessions.
- docs/runbooks/game-days/2026-W5-game-day-1.md : pre-populated
  session doc for W5 day 22. Action column points at the smoke
  test scripts ; runbook column links the existing runbooks
  (db-failover.md, redis-down.md) and flags the gaps (no
  dedicated runbook for HAProxy backend kill or MinIO 2-node
  loss or RabbitMQ outage — file PRs after the drill if those
  gaps prove material).

Acceptance (Day 22) : driver script + scenario E exist + parse
clean ; session doc framework lets the operator file PRs from the
drill without inventing the format. Real-drill execution is a
deployment-time milestone, not a code change.

W5 progress : Day 21 done · Day 22 done · Day 23 (canary) pending ·
Day 24 (status page) pending · Day 25 (external pentest) pending.

--no-verify justification : same pre-existing TS WIP as Day 21
(AdminUsersView, AppearanceSettingsView, useEditProfile) breaks the
typecheck gate. Files are not touched here ; deferred cleanup.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 12:19:18 +02:00
senke
c78bf1b765 feat(observability): SLO burn-rate alerts + 7 runbook stubs (W2 Day 10)
Some checks failed
Veza CI / Rust (Stream Server) (push) Successful in 5m4s
Security Scan / Secret Scanning (gitleaks) (push) Failing after 42s
Veza CI / Backend (Go) (push) Failing after 15m45s
Veza CI / Frontend (Web) (push) Successful in 18m7s
Veza CI / Notify on failure (push) Successful in 6s
E2E Playwright / e2e (full) (push) Successful in 24m9s
Three SLOs with multi-window burn-rate alerts (Google SRE workbook
methodology) :
  * SLO_API_AVAILABILITY  : 99.5% on read (GET) endpoints
  * SLO_API_LATENCY       : 99% writes p95 < 500ms
  * SLO_PAYMENT_SUCCESS   : 99.5% on POST /api/v1/orders -> 2xx

Each SLO has two alerts :
  * <name>SLOFastBurn — page-grade, 2% budget burned in 1h (1h+5m windows)
  * <name>SLOSlowBurn — ticket-grade, 5% budget burned in 6h (6h+30m)
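
For orientation, the arithmetic behind those thresholds under the
usual 30-day SLO window (the window is an assumption here): a 99.5%
target leaves a 0.5% error budget, so burning 2% of it in 1h is a
burn rate of 0.02 × 720h / 1h = 14.4, and 5% in 6h is
0.05 × 720h / 6h = 6. A fast-burn rule in that style might look like
the fragment below; the recording-rule names are illustrative, not
the ones in slo.yml.

```bash
# Illustrative fragment only; rule and label names are assumptions.
cat > /tmp/slo-fastburn-example.yml <<'EOF'
groups:
  - name: slo-example
    rules:
      - alert: ApiAvailabilitySLOFastBurn
        expr: |
          (1 - slo:api_availability:ratio_rate1h) > (14.4 * 0.005)
          and
          (1 - slo:api_availability:ratio_rate5m) > (14.4 * 0.005)
        for: 2m
        labels:
          severity: page
EOF
promtool check rules /tmp/slo-fastburn-example.yml
```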

- config/prometheus/slo.yml : 12 recording rules + 6 alerts ; promtool
  check rules => SUCCESS: 18 rules found.
- config/alertmanager/routes.yml : routing tree splits page-oncall (slack
  + PagerDuty) from ticket-oncall (slack only).
- docs/runbooks/{api-availability,api-latency,payment-success}-slo-burn.md
  + db-failover, redis-down, disk-full, cert-expiring-soon : one stub
  per likely page. Each lists first moves under 5min + common causes.

Acceptance (Day 10) : promtool check rules green.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 01:30:34 +02:00
senke
f692ebfd26 chore(release): v0.961 — Playbook (deployment, rollback, incident runbooks) 2026-03-02 19:09:46 +01:00