Compare commits
5 commits: 55eeed495d ... 70df301823

Commits in this range: 70df301823, 5759143e97, 3123f26fd4, 342d25b40f, fc0264e0da
24 changed files with 1362 additions and 0 deletions

docs/runbooks/game-days/2026-W5-game-day-1.md (new file, 88 lines)
@@ -0,0 +1,88 @@

# Game day session — 2026 W5 (game day #1)

> **Driver** : _to fill at session time_
> **Observers** : _list at session time_
> **Environment** : staging
> **Goal** : verify the v1.0.9 runbooks (W2 Day 10) work end-to-end against the W2-W4 infrastructure (pg_auto_failover, HAProxy, Redis Sentinel, distributed MinIO, RabbitMQ).

This is the **first** scheduled game day of the project. The W5 acceptance gate is :

- No silent fail across the 5 scenarios.
- Max consecutive 5xx run ≤ 30 s.
- Every Prometheus alert fired ≤ 1 min after the inducing event.
- Every scenario has a documented runbook (file the gap as a PR if missing).

The fields below are pre-populated where a value is known (the `Action` column maps to the smoke test script ; the `Runbook used` column maps to the existing runbook). The empty fields are for the operator to fill **as the session runs**.

## Pre-flight checklist

- [ ] All target services healthy at start (run `incus list` ; check Grafana "Veza API Overview" dashboard ; sample `GET /api/v1/health` returns 200)
- [ ] On-call team notified in `#engineering` 1 h before kickoff so a real page doesn't surprise them
- [ ] PagerDuty schedule overridden to silence pages on staging — or pre-agree test pages will be ack'd silently
- [ ] Driver script ready : `bash scripts/security/game-day-driver.sh` (one-shot orchestrator)
- [ ] Vault secrets exported in the shell env where the driver runs (see the sketch after this checklist) :
  - `REDIS_PASS` + `SENTINEL_PASS` (scenario C — `infra/ansible/group_vars/redis_ha.vault.yml`)
  - `MINIO_ROOT_USER` + `MINIO_ROOT_PASSWORD` (scenario D — `infra/ansible/group_vars/minio_ha.vault.yml`)
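
For reference, a minimal sketch of that last export step. The key names inside the two vault files aren't shown here, so read them with `ansible-vault view` and paste the values by hand:

```bash
# Pre-flight export sketch — values come from the two vault files listed above.
ansible-vault view infra/ansible/group_vars/redis_ha.vault.yml    # copy the Redis / Sentinel passwords
ansible-vault view infra/ansible/group_vars/minio_ha.vault.yml    # copy the MinIO root credentials
export REDIS_PASS='<paste>' SENTINEL_PASS='<paste>'
export MINIO_ROOT_USER='<paste>' MINIO_ROOT_PASSWORD='<paste>'
bash scripts/security/game-day-driver.sh                          # one-shot orchestrator
```
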
## Session log

### Scenario A — Postgres primary failover (RTO < 60 s)

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `bash infra/ansible/tests/test_pg_failover.sh` |
| Observation | _to fill : measured RTO, alert latency, any 5xx visible to the API tier_ |
| Runbook used | [`db-failover.md`](../db-failover.md) |
| Gap discovered | _to fill_ |

### Scenario B — HAProxy backend-api 1 fail-over

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `bash infra/ansible/tests/test_backend_failover.sh` |
| Observation | _to fill : how long HAProxy took to mark backend-api-1 DOWN, whether WS sessions reconnected to backend-api-2 cleanly_ |
| Runbook used | _gap : no dedicated runbook ; HAProxy ops live in `infra/ansible/roles/haproxy/README.md`. File a PR if a true runbook is needed._ |
| Gap discovered | _to fill_ |

### Scenario C — Redis Sentinel master promotion

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `REDIS_PASS=… SENTINEL_PASS=… bash infra/ansible/tests/test_redis_failover.sh` |
| Observation | _to fill : measured promotion time, whether chat WS sessions reconnected without message loss_ |
| Runbook used | [`redis-down.md`](../redis-down.md) |
| Gap discovered | _to fill_ |

### Scenario D — MinIO 2-node loss EC:2 reconstruction

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `MINIO_ROOT_USER=… MINIO_ROOT_PASSWORD=… bash infra/ansible/tests/test_minio_resilience.sh` |
| Observation | _to fill : checksum match across the 2-node-down window, self-heal duration after restart_ |
| Runbook used | `infra/ansible/roles/minio_distributed/README.md` (operations section) |
| Gap discovered | _to fill : a true runbook for "MinIO 2 nodes down" doesn't exist — file PR_ |

### Scenario E — RabbitMQ outage backend stays up (60 s)

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | _to fill_ |
| Action | `bash infra/ansible/tests/test_rabbitmq_outage.sh` |
| Observation | _to fill : max consecutive 5xx streak, eventbus error log lines visible_ |
| Runbook used | _gap : no dedicated runbook for RabbitMQ outage. File PR if real exercise reveals one is needed._ |
| Gap discovered | _to fill_ |

## PRs filed from this session

To populate during / after the session :

- _branch / title — link_
- _branch / title — link_

## Take-aways

_Free-form notes after the session : what surprised us, what will we change next time, what should be promoted from "implicit knowledge" to a proper runbook entry._

docs/runbooks/game-days/README.md (new file, 45 lines)
@@ -0,0 +1,45 @@

# Game days

Quarterly chaos drill run on staging. The cadence is one **per quarter minimum**, plus one **per release-major** (v2.0, v2.1, ...). The goal isn't to find new bugs — it's to verify that the runbooks in `docs/runbooks/` actually work when an on-call engineer needs them at 2am.

## Why

- Production systems fail. Pretending they won't is how outages stretch from minutes to hours.
- Runbooks rot. Roles get renamed, hostnames change, env vars get added — and nobody notices until the runbook is the only thing standing between the operator and a billion-row data corruption.
- New team members need a low-stakes way to drive an incident. Game days are that.

## How

1. **Pick a date.** Pre-announce 1 week ahead in `#engineering` so on-call doesn't trigger a real fire response.
2. **Run the driver** : `bash scripts/security/game-day-driver.sh`. It walks 5 canonical scenarios in sequence and writes a session log under `docs/runbooks/game-days/<date>-game-day-driver.log` (kickoff sketch after this list).
3. **Fill the session doc** : copy `TEMPLATE.md` to `<YYYY-MM-DD>.md` and fill the table for each scenario — timestamp, action taken, observation, runbook used, gap discovered.
4. **File PRs for gaps.** One PR per fix : runbook update, alert tuning, code change. Cross-reference the session doc.
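
A compact kickoff sketch for steps 2-3 above (the date in the filename is illustrative):

```bash
# Copy the template for this session, then run the driver and keep its report.
cp docs/runbooks/game-days/TEMPLATE.md docs/runbooks/game-days/2026-02-03.md
bash scripts/security/game-day-driver.sh
# The driver's own log lands next to the session doc:
ls docs/runbooks/game-days/*-game-day-driver.log
```
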
## Scenarios

The driver currently exercises 5 :

| ID | Scenario | Smoke test | Acceptance gate |
| -- | ------------------------------------- | ---------------------------------------------- | -------------------------------------------- |
| A | Postgres primary failover | `infra/ansible/tests/test_pg_failover.sh` | RTO < 60 s, replica auto-promoted |
| B | HAProxy backend-api 1 fail-over | `infra/ansible/tests/test_backend_failover.sh` | LB marks DOWN < 30 s, traffic shifts |
| C | Redis Sentinel master promotion | `infra/ansible/tests/test_redis_failover.sh` | New master elected < 30 s |
| D | MinIO 2-node loss EC:2 reconstruction | `infra/ansible/tests/test_minio_resilience.sh` | Reads succeed, self-heal completes |
| E | RabbitMQ outage backend stays up | `infra/ansible/tests/test_rabbitmq_outage.sh` | No 5xx run > 30 s, error logged loudly |

Add new scenarios as new failure modes get exposed. Edit `scripts/security/game-day-driver.sh` to register them in `SCENARIOS=` + the two associative arrays.
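
Registering a new scenario is a small edit to the driver. A sketch of what that might look like — `SCENARIOS=` is taken from the sentence above, but the names of the two associative arrays are assumptions, so match whatever the script actually declares:

```bash
# In scripts/security/game-day-driver.sh — hypothetical scenario F registration.
SCENARIOS="A B C D E F"
SCENARIO_SCRIPT[F]="test_haproxy_restart.sh"                # assumed array name
SCENARIO_DESC[F]="HAProxy hard restart, sessions survive"   # assumed array name
```
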
## Acceptance bar (pre-launch)

Per `docs/ROADMAP_V1.0_LAUNCH.md` §Day 22 :

- **No silent fail.** Every scenario surfaces SOME observable signal — alert, log, dashboard.
- **No 5xx run > 30 s.** Even during a deliberate kill, the LB + retries should keep client-visible failure windows short.
- **Each Prometheus alert fires < 1 min.** From the moment of failure to the first PagerDuty / Slack ping.

## Schedule

| Date | Driver | Session doc | Status |
| ------------ | ------------------------- | --------------------------------------------------------- | ------ |
| 2026-W5 | _name_ + role | [`2026-W5-game-day-1.md`](./2026-W5-game-day-1.md) | TBD |
| 2026-Q3 | _tbd_ | _tbd_ | scheduled |

docs/runbooks/game-days/TEMPLATE.md (new file, 85 lines)
@@ -0,0 +1,85 @@

# Game day session — `<YYYY-MM-DD>`

> **Driver** : `<name> (<role>)`
> **Observers** : `<list>`
> **Environment** : staging / lab / prod-canary
> **Goal** : verify the runbooks in `docs/runbooks/` work end-to-end.

## Pre-flight

- [ ] All target services healthy at start (run `kubectl get pods` / `incus list` / Grafana cluster overview)
- [ ] On-call team notified in `#engineering` 1 h before kickoff so a real page doesn't surprise them
- [ ] PagerDuty schedule overridden to silence pages on the test environment (or pre-agree the test pages will be acknowledged silently)
- [ ] Driver script ready : `bash scripts/security/game-day-driver.sh --help`

## Session log

For each scenario, fill the row immediately after running the smoke test.

### Scenario A — Postgres primary failover

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `incus stop --force pgaf-primary` |
| Observation | _e.g. failover took 38 s, no client-visible 5xx, alert `PostgresPrimaryUnreachable` fired in 25 s_ |
| Runbook used | [`db-failover.md`](../db-failover.md) |
| Gap discovered | _e.g. step 3 mentions a script that no longer exists — file PR to fix_ |

### Scenario B — HAProxy backend-api 1 fail-over

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `incus stop --force backend-api-1` |
| Observation | |
| Runbook used | _add path here ; if no runbook exists this is a gap_ |
| Gap discovered | |

### Scenario C — Redis Sentinel master promotion

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `incus stop --force redis-1` (or whichever Sentinel reports as master) |
| Observation | |
| Runbook used | [`redis-down.md`](../redis-down.md) |
| Gap discovered | |

### Scenario D — MinIO 2-node loss EC:2 reconstruction

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `KILL_NODES="minio-2 minio-3" bash infra/ansible/tests/test_minio_resilience.sh` |
| Observation | |
| Runbook used | _add path ; nothing dedicated yet, open issue if needed_ |
| Gap discovered | |

### Scenario E — RabbitMQ outage backend stays up

| Field | Value |
| ----------------- | --------------------------------------------------------------------------- |
| Timestamp UTC | |
| Action | `incus stop --force rabbitmq` (60 s window) |
| Observation | |
| Runbook used | _add path ; W5 day 22 to write if missing_ |
| Gap discovered | |

## Acceptance gate

- [ ] No silent fail across the 5 scenarios
- [ ] Max consecutive 5xx run ≤ 30 s
- [ ] Every Prometheus alert fired ≤ 1 min after the inducing event
- [ ] Every scenario has a documented runbook (file the gap as a PR if missing)

## PRs filed from this session

Track here so the next session knows what was actioned :

- `<branch> — <title>` — link
- `<branch> — <title>` — link

## Take-aways

Free-form. What did we learn ? What surprised us ? What will we change for the next drill ?

infra/ansible/roles/veza_app/README.md (new file, 57 lines)
@@ -0,0 +1,57 @@

# `veza_app` role

Generic, parameterized role that deploys ONE Veza application component (`backend`, `stream`, or `web`) into a freshly-recreated Incus container, then probes its health endpoint. Driven from `playbooks/deploy_app.yml` once per component, per blue/green color in a deploy run.

## Why one role for three components?

80% of the work is the same for each component:

1. Recreate the Incus container from a profile (`incus delete --force` then `incus launch`).
2. Apt-install OS deps.
3. Pull the release tarball from the Forgejo Package Registry, extract.
4. Render the env file from Vault-backed variables.
5. Install a systemd unit (or, for `web`, an nginx site config).
6. Start the service and probe its health endpoint.

The 20% deltas (binary name, port, OS deps, env-file shape, kind: binary vs static) live in `vars/<component>.yml`.

## Inputs

The caller (playbook) is expected to set:

| variable | required | meaning |
| ----------------------- | -------- | ----------------------------------------------------------------------------- |
| `veza_component` | yes | One of `backend`, `stream`, `web`. Drives `vars/<component>.yml` lookup. |
| `veza_target_color` | yes | `blue` or `green`. The role recreates `<prefix><component>-<color>`. |
| `veza_release_sha` | yes | Full git SHA of the release. Names the tarball + the install dir. |
| `veza_container_prefix` | inherit | From `group_vars/<env>.yml`, e.g. `veza-staging-` or `veza-`. |
| `veza_incus_host` | inherit | Inventory host that runs `incus exec`. |

Other parameters fall through `defaults/main.yml` (overridable per env in `group_vars/<env>.yml`).
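
For illustration, one component deploy might be driven like this. The exact extra-vars interface of `playbooks/deploy_app.yml` isn't shown in this role, so treat the flags and inventory path as assumptions:

```bash
# Hypothetical invocation: deploy the backend to the green color at a given SHA.
ansible-playbook -i inventories/staging playbooks/deploy_app.yml \
  -e veza_component=backend \
  -e veza_target_color=green \
  -e veza_release_sha=0123456789abcdef0123456789abcdef01234567
```
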
## What the role does NOT do

- Switch HAProxy. That's the `veza_haproxy_switch` role, run after health probes pass for ALL three components.
- Run database migrations. Those run once per deploy in a separate ephemeral `<prefix>backend-tools` container, before any color is recreated. See `playbooks/deploy_app.yml` Phase A.
- Touch data containers (postgres, redis, rabbitmq, minio). Those go through `playbooks/deploy_data.yml`, with their own roles.

## Component matrix

| | backend (binary) | stream (binary) | web (static) |
| ------------ | ----------------- | ----------------------- | ------------------------- |
| binary | `veza-api` | `stream_server` | n/a — nginx serves dist |
| port | 8080 | 8082 | 80 |
| health path | `/api/v1/health` | `/health` | `/` |
| extra deps | postgresql-client | (libssl3 in common set) | nginx |
| service unit | yes (systemd) | yes (systemd) | no (nginx as systemd dep) |
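
The matrix translates directly into manual spot checks. A sketch, assuming the `veza-staging-` container prefix and the green color:

```bash
# In-container health probes matching the matrix above.
incus exec veza-staging-backend-green -- curl -fsS -m 5 http://127.0.0.1:8080/api/v1/health
incus exec veza-staging-stream-green  -- curl -fsS -m 5 http://127.0.0.1:8082/health
incus exec veza-staging-web-green     -- curl -fsS -m 5 -o /dev/null -w '%{http_code}\n' http://127.0.0.1:80/
```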

infra/ansible/roles/veza_app/defaults/main.yml (new file, 43 lines)
@@ -0,0 +1,43 @@

# veza_app role defaults — the small set of knobs every component
# inherits unless overridden in group_vars/<env>.yml or vars/<component>.yml.
#
# Inputs ARE expected from the caller (see README.md for the required
# list); these defaults only cover values that ARE NOT environment-
# specific (paths, file modes, retry counts).
---
# These should be set by the caller — defaulting to empty strings so a
# misconfigured invocation fails loudly instead of silently picking
# `backend`.
veza_component: ""
veza_target_color: ""
veza_release_sha: ""

# Paths in-container. The per-SHA install dir lets multiple releases
# coexist for forensics: a failed deploy leaves the previous tree
# on disk, recoverable via `incus exec ... -- ls /opt/veza/<component>/`.
veza_app_install_dir: "{{ veza_install_root }}/{{ veza_component }}"
veza_app_release_dir: "{{ veza_app_install_dir }}/{{ veza_release_sha }}"
veza_app_current_link: "{{ veza_app_install_dir }}/current"

# System user that owns the install dir + runs the systemd service.
# Per-component user prevents cross-process file leaks on a shared host.
veza_app_user: "veza-{{ veza_component }}"
veza_app_group: "veza-{{ veza_component }}"

# Mode bits used consistently across templates.
veza_app_dir_mode: "0750"
veza_app_file_mode: "0640"
veza_app_secret_mode: "0400"
veza_app_binary_mode: "0755"

# Container name, derived from inputs. Built once here so every
# task references the same name without re-deriving.
veza_app_container_name: "{{ veza_container_prefix }}{{ veza_component }}-{{ veza_target_color }}"

# URL to fetch the release tarball. Computed once per task chain.
veza_app_artifact_url: "{{ veza_artifact_base_url }}/{{ veza_component }}/{{ veza_release_sha }}/veza-{{ veza_component }}-{{ veza_release_sha }}.tar.zst"

# How long to wait for the container's network namespace to come up
# after `incus launch` before we start running tasks against it.
# Debian 13 with a small profile is ready in ~3-5s; 30s is a safety net.
veza_app_container_ready_timeout: 30
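
The forensics comment above boils down to two commands. A sketch, assuming `veza_install_root` is `/opt/veza` and a `veza-staging-` prefixed backend container:

```bash
# List coexisting release trees and see which SHA `current` points at.
incus exec veza-staging-backend-green -- ls -l /opt/veza/backend/
incus exec veza-staging-backend-green -- readlink -f /opt/veza/backend/current
```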

infra/ansible/roles/veza_app/handlers/main.yml (new file, 24 lines)
@@ -0,0 +1,24 @@

# veza_app handlers. Notified by tasks under config_*.yml when an env
# file or systemd unit changes. Restart (not reload) for binary kinds
# because Go/Rust services don't honor SIGHUP. Reload for nginx so
# active connections drain.
---
- name: Reload systemd
  ansible.builtin.systemd:
    daemon_reload: true
  listen: "veza-app daemon-reload"

- name: Restart binary service
  ansible.builtin.systemd:
    name: "{{ veza_app_service_name }}"
    state: restarted
    daemon_reload: true
  listen: "veza-app restart"
  when: veza_app_kind == 'binary'

- name: Reload nginx
  ansible.builtin.systemd:
    name: nginx
    state: reloaded
  listen: "veza-app reload-nginx"
  when: veza_app_kind == 'static'

infra/ansible/roles/veza_app/meta/main.yml (new file, 15 lines)
@@ -0,0 +1,15 @@

---
galaxy_info:
  role_name: veza_app
  author: Veza Ops
  description: >-
    Deploys one Veza application component (backend/stream/web) into a
    freshly-recreated Incus container. Driven from playbooks/deploy_app.yml
    once per component per blue/green color in a deploy run.
  license: proprietary
  min_ansible_version: "2.15"
  platforms:
    - name: Debian
      versions: ["13"]

dependencies: []

infra/ansible/roles/veza_app/tasks/artifact.yml (new file, 83 lines)
@@ -0,0 +1,83 @@

# Pull the release tarball from the Forgejo Package Registry and
# extract it under /opt/veza/<component>/<sha>/. Atomic via the
# `current` symlink: nothing visible to the running service until
# the symlink swap at the end. Idempotent: re-running this task with
# the same SHA is a no-op once VERSION exists.
---
- name: Ensure veza_app system user exists
  ansible.builtin.user:
    name: "{{ veza_app_user }}"
    system: true
    shell: /usr/sbin/nologin
    home: "{{ veza_app_install_dir }}"
    create_home: false
  tags: [veza_app, artifact]

- name: Ensure install + log directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: "{{ veza_app_user }}"
    group: "{{ veza_app_group }}"
    mode: "0755"
  loop:
    - "{{ veza_app_install_dir }}"
    - "{{ veza_app_release_dir }}"
    - "{{ veza_log_root }}"
  tags: [veza_app, artifact]

- name: Fetch release tarball into /tmp
  ansible.builtin.get_url:
    url: "{{ veza_app_artifact_url }}"
    dest: "/tmp/veza-{{ veza_component }}-{{ veza_release_sha }}.tar.zst"
    mode: "0600"
    headers:
      Authorization: "token {{ vault_forgejo_registry_token | default('') }}"
    timeout: 60
    force: false  # don't re-download if file already present (idempotency on retries)
  tags: [veza_app, artifact]

- name: Extract tarball into the per-SHA release dir
  ansible.builtin.unarchive:
    src: "/tmp/veza-{{ veza_component }}-{{ veza_release_sha }}.tar.zst"
    dest: "{{ veza_app_release_dir }}"
    remote_src: true
    owner: "{{ veza_app_user }}"
    group: "{{ veza_app_group }}"
    creates: "{{ veza_app_release_dir }}/VERSION"
  tags: [veza_app, artifact]

- name: Verify the binary landed (kind=binary only)
  ansible.builtin.stat:
    path: "{{ veza_app_release_dir }}/{{ veza_app_binary_name }}"
  register: binary_stat
  when: veza_app_kind == 'binary'
  tags: [veza_app, artifact]

- name: Fail fast if the binary is missing or not executable
  ansible.builtin.assert:
    that:
      - binary_stat.stat.exists
      - binary_stat.stat.executable
    fail_msg: >-
      Tarball {{ veza_app_artifact_url }} extracted but
      {{ veza_app_binary_name }} is missing or not executable at
      {{ veza_app_release_dir }}. Tarball-build job is broken.
  when: veza_app_kind == 'binary'
  tags: [veza_app, artifact]

- name: Atomically swap the `current` symlink
  ansible.builtin.file:
    path: "{{ veza_app_current_link }}"
    src: "{{ veza_app_release_dir }}"
    state: link
    force: true
    owner: "{{ veza_app_user }}"
    group: "{{ veza_app_group }}"
  tags: [veza_app, artifact]

- name: Cleanup downloaded tarball
  ansible.builtin.file:
    path: "/tmp/veza-{{ veza_component }}-{{ veza_release_sha }}.tar.zst"
    state: absent
  tags: [veza_app, artifact]
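
When the fetch task fails, checking the same URL out of band localizes the problem quickly. A sketch — `$ARTIFACT_BASE` and `$FORGEJO_TOKEN` stand in for the `veza_artifact_base_url` and `vault_forgejo_registry_token` values:

```bash
# HEAD-request the tarball the role would download (backend example, illustrative SHA).
SHA=0123456789abcdef0123456789abcdef01234567
curl -fsI -H "Authorization: token $FORGEJO_TOKEN" \
  "$ARTIFACT_BASE/backend/$SHA/veza-backend-$SHA.tar.zst"
```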

infra/ansible/roles/veza_app/tasks/config_binary.yml (new file, 74 lines)
@@ -0,0 +1,74 @@

# Render env file + secret files + systemd unit, then start the
# service. Used for kind=binary (backend, stream); the static-kind
# equivalent is config_static.yml.
---
- name: Ensure /etc/veza exists for env + secret files
  ansible.builtin.file:
    path: "{{ veza_config_root }}"
    state: directory
    owner: root
    group: "{{ veza_app_group }}"
    mode: "0750"
  tags: [veza_app, config]

- name: Ensure /etc/veza/secrets exists (mode 0750)
  ansible.builtin.file:
    path: "{{ veza_config_root }}/secrets"
    state: directory
    owner: root
    group: "{{ veza_app_group }}"
    mode: "0750"
  tags: [veza_app, config]

- name: Render component env file from Vault
  ansible.builtin.template:
    src: "{{ veza_app_env_template }}"
    dest: "{{ veza_app_env_file }}"
    owner: root
    group: "{{ veza_app_group }}"
    mode: "{{ veza_app_file_mode }}"
  notify: "veza-app restart"
  tags: [veza_app, config]

# Render each secret file from Vault. `loop_control.label` masks the
# value in playbook output even though `no_log: true` is set, defense
# in depth.
- name: Install secret files from Vault
  ansible.builtin.copy:
    content: >-
      {{ (lookup('vars', item.var) | b64decode)
         if item.decode | default('') == 'base64'
         else lookup('vars', item.var) }}
    dest: "{{ item.path }}"
    owner: "{{ veza_app_user }}"
    group: "{{ veza_app_group }}"
    mode: "{{ item.mode }}"
  loop: "{{ veza_app_secret_files }}"
  loop_control:
    label: "{{ item.path }}"
  no_log: true
  notify: "veza-app restart"
  tags: [veza_app, config, secrets]

- name: Render systemd unit
  ansible.builtin.template:
    src: "{{ veza_app_service_template }}"
    dest: "/etc/systemd/system/{{ veza_app_service_name }}.service"
    owner: root
    group: root
    mode: "0644"
  notify:
    - "veza-app daemon-reload"
    - "veza-app restart"
  tags: [veza_app, config, service]

- name: Flush handlers so daemon-reload + restart happen before probe
  ansible.builtin.meta: flush_handlers
  tags: [veza_app, config, service]

- name: Enable + start the service
  ansible.builtin.systemd:
    name: "{{ veza_app_service_name }}"
    state: started
    enabled: true
  tags: [veza_app, service]
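
After the role has run, the rendered tree and the unit can be spot-checked by hand. A sketch (container name illustrative; `veza-backend` is the backend's service name from `vars/backend.yml`):

```bash
# Config root, secret modes, and service state after config_binary.yml.
incus exec veza-staging-backend-green -- ls -l /etc/veza /etc/veza/secrets
incus exec veza-staging-backend-green -- systemctl is-active veza-backend
```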

infra/ansible/roles/veza_app/tasks/config_static.yml (new file, 34 lines)
@@ -0,0 +1,34 @@

# Render nginx config and reload it. Used for kind=static (web).
# The dist/ tarball was already extracted under /var/www/veza-web/<sha>
# by artifact.yml ; the only delta this task makes between deploys is
# the symlink swap + nginx reload (the freshly-launched container
# always reaches this task in a clean state, so the reload is mostly
# defensive — first-run config render needs it).
---
- name: Disable the default nginx site so it never shadows ours
  ansible.builtin.file:
    path: /etc/nginx/sites-enabled/default
    state: absent
  tags: [veza_app, config]

- name: Render veza-web nginx site
  ansible.builtin.template:
    src: "{{ veza_app_nginx_template }}"
    dest: "{{ veza_app_nginx_site }}"
    owner: root
    group: root
    mode: "0644"
    validate: "nginx -t -c /etc/nginx/nginx.conf -q"
  notify: "veza-app reload-nginx"
  tags: [veza_app, config]

- name: Flush handlers so nginx reloads before the probe
  ansible.builtin.meta: flush_handlers
  tags: [veza_app, config]

- name: Enable + start nginx
  ansible.builtin.systemd:
    name: nginx
    state: started
    enabled: true
  tags: [veza_app, service]

infra/ansible/roles/veza_app/tasks/container.yml (new file, 24 lines)
@@ -0,0 +1,24 @@

# Reachability guard. The container is created (or destroyed-and-
# recreated) by playbooks/deploy_app.yml ON THE INCUS HOST before the
# role is invoked — by the time we run, the container exists and the
# `community.general.incus` connection plugin is wired in inventory.
# This task just smoke-tests the connection so a misconfigured run
# fails on the first task instead of on apt halfway through.
---
- name: Verify the container is reachable via the connection plugin
  ansible.builtin.command: /bin/true
  changed_when: false
  tags: [veza_app, container]

- name: Record the SHA + color we are about to land
  ansible.builtin.copy:
    dest: "{{ veza_state_root }}/release.txt"
    content: |
      component={{ veza_component }}
      color={{ veza_target_color }}
      sha={{ veza_release_sha }}
      deployed_at={{ ansible_date_time.iso8601 }}
    owner: root
    group: root
    mode: "0644"
  tags: [veza_app, container]

infra/ansible/roles/veza_app/tasks/main.yml (new file, 47 lines)
@@ -0,0 +1,47 @@

# veza_app — entry point. Loads component-specific vars, then
# orchestrates container recreate → OS deps → artifact install →
# config render → service start → health probe.
#
# Skeleton commit: this file dispatches to per-step files which are
# stubbed in this commit and filled in subsequent commits (one per
# component). Running this role today is a no-op beyond the var
# include — playbooks/deploy_app.yml is the orchestrator that
# eventually invokes the role for real.
---
- name: Validate required inputs
  ansible.builtin.assert:
    that:
      - veza_component in ['backend', 'stream', 'web']
      - veza_target_color in ['blue', 'green']
      - veza_release_sha | length == 40
    fail_msg: >-
      veza_app role requires veza_component (backend|stream|web),
      veza_target_color (blue|green), veza_release_sha (40-char git SHA).
      Got: component={{ veza_component }} color={{ veza_target_color }}
      sha={{ veza_release_sha }}.
    quiet: true
  tags: [veza_app, always]

- name: Load component-specific vars
  ansible.builtin.include_vars: "{{ veza_component }}.yml"
  tags: [veza_app, always]

- name: Recreate Incus container (delete-if-exists then launch)
  ansible.builtin.include_tasks: container.yml
  tags: [veza_app, container]

- name: Install OS dependencies
  ansible.builtin.include_tasks: os_deps.yml
  tags: [veza_app, packages]

- name: Fetch + extract release tarball
  ansible.builtin.include_tasks: artifact.yml
  tags: [veza_app, artifact]

- name: Render component config (env file + service unit | nginx site)
  ansible.builtin.include_tasks: "config_{{ veza_app_kind }}.yml"
  tags: [veza_app, config]

- name: Probe health endpoint
  ansible.builtin.include_tasks: probe.yml
  tags: [veza_app, probe]

infra/ansible/roles/veza_app/tasks/os_deps.yml (new file, 42 lines)
@@ -0,0 +1,42 @@

# Install OS deps inside the freshly-created container. Wait briefly
# for cloud-init / debootstrap to finish first — apt locks held by
# `unattended-upgrades` on first boot would race a parallel
# `apt-get update`.
---
- name: Ensure /var/lib/veza state dir exists
  ansible.builtin.file:
    path: "{{ veza_state_root }}"
    state: directory
    owner: root
    group: root
    mode: "0755"
  tags: [veza_app, packages]

- name: Wait for any first-boot apt lock to clear
  ansible.builtin.shell: |
    set -e
    for i in $(seq 1 30); do
      if ! fuser /var/lib/dpkg/lock-frontend >/dev/null 2>&1 && \
         ! fuser /var/lib/apt/lists/lock >/dev/null 2>&1; then
        exit 0
      fi
      sleep 2
    done
    echo "apt locks still held after 60s"
    exit 1
  args:
    executable: /bin/bash
  changed_when: false
  tags: [veza_app, packages]

- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 60
  tags: [veza_app, packages]

- name: Install OS packages (common + component-specific)
  ansible.builtin.apt:
    name: "{{ veza_common_os_packages + veza_app_extra_packages }}"
    state: present
  tags: [veza_app, packages]

infra/ansible/roles/veza_app/tasks/probe.yml (new file, 33 lines)
@@ -0,0 +1,33 @@

# Hammer the component's health endpoint until 200 or we exhaust the
# retry budget. This runs INSIDE the container (curl-to-localhost),
# which means we're proving the systemd unit is up and the process
# is bound — not the Incus DNS / network path. Phase D in
# playbooks/deploy_app.yml does the cross-container probe via curl
# from the runner.
---
- name: Wait for {{ veza_app_service_name }} to answer on :{{ veza_app_listen_port }}{{ veza_app_health_path }}
  ansible.builtin.uri:
    url: "http://127.0.0.1:{{ veza_app_listen_port }}{{ veza_app_health_path }}"
    method: GET
    status_code: [200]
    return_content: false
    timeout: 5
  register: veza_app_probe
  retries: "{{ veza_healthcheck_retries }}"
  delay: "{{ veza_healthcheck_delay_seconds }}"
  until: veza_app_probe.status == 200
  changed_when: false
  tags: [veza_app, probe]

- name: Record probe success
  ansible.builtin.copy:
    dest: "{{ veza_state_root }}/last-probe.txt"
    content: |
      probed_at={{ ansible_date_time.iso8601 }}
      url=http://127.0.0.1:{{ veza_app_listen_port }}{{ veza_app_health_path }}
      sha={{ veza_release_sha }}
      result=ok
    owner: root
    group: root
    mode: "0644"
  tags: [veza_app, probe]

infra/ansible/roles/veza_app/templates/backend.env.j2 (new file, 86 lines)
@@ -0,0 +1,86 @@

# Managed by Ansible — do not edit by hand. veza_app role,
# templates/backend.env.j2 ; rendered fresh on every deploy.
# Sourced by /etc/systemd/system/veza-backend.service via EnvironmentFile=.

# --- Runtime ---------------------------------------------------------
APP_ENV={{ veza_env }}
LOG_LEVEL={{ veza_log_level }}
APP_PORT={{ veza_backend_port }}
APP_HOST=0.0.0.0
RELEASE_SHA={{ veza_release_sha }}
COLOR={{ veza_target_color }}

# --- Public URLs (shape OAuth redirects, email links, CSP) -----------
FRONTEND_URL={{ veza_public_url }}
PUBLIC_HOST={{ veza_public_host }}
CORS_ALLOWED_ORIGINS={{ veza_cors_allowed_origins | join(',') }}

# --- Datastore -------------------------------------------------------
# Each container resolves data hosts via Incus DNS (.lxd suffix).
# postgres-primary is the writable side ; pgbouncer fronts it.
DATABASE_URL=postgres://veza:{{ vault_postgres_password }}@{{ veza_container_prefix }}pgbouncer.{{ veza_incus_dns_suffix }}:6432/veza?sslmode=require
DB_HOST={{ veza_container_prefix }}pgbouncer.{{ veza_incus_dns_suffix }}
DB_PORT=6432
DB_USER=veza
DB_PASS={{ vault_postgres_password }}
DB_NAME=veza
DB_SSLMODE=require

# --- Cache + queue ---------------------------------------------------
REDIS_URL=redis://:{{ vault_redis_password }}@{{ veza_container_prefix }}redis-1.{{ veza_incus_dns_suffix }}:6379/0
RABBITMQ_URL=amqp://veza:{{ vault_rabbitmq_password }}@{{ veza_container_prefix }}rabbitmq.{{ veza_incus_dns_suffix }}:5672/veza

# --- Object storage (MinIO) ------------------------------------------
AWS_S3_ENDPOINT=http://{{ veza_container_prefix }}minio-1.{{ veza_incus_dns_suffix }}:9000
AWS_REGION=us-east-1
AWS_ACCESS_KEY_ID={{ vault_minio_access_key }}
AWS_SECRET_ACCESS_KEY={{ vault_minio_secret_key }}
S3_BUCKET=veza-{{ veza_env }}

# --- JWT (RS256) -----------------------------------------------------
JWT_PRIVATE_KEY_PATH={{ veza_config_root }}/secrets/jwt-private.pem
JWT_PUBLIC_KEY_PATH={{ veza_config_root }}/secrets/jwt-public.pem
JWT_ALGORITHM=RS256
JWT_ACCESS_TOKEN_TTL_MINUTES=5
JWT_REFRESH_TOKEN_TTL_HOURS=168

# --- Chat WebSocket (separate signing secret) ------------------------
CHAT_JWT_SECRET={{ vault_chat_jwt_secret }}

# --- Backend ↔ stream-server shared secret ---------------------------
STREAM_SERVER_INTERNAL_API_KEY={{ vault_stream_internal_api_key }}
STREAM_SERVER_BASE_URL=http://{{ veza_container_prefix }}stream-{{ veza_target_color }}.{{ veza_incus_dns_suffix }}:{{ veza_stream_port }}

# --- OAuth refresh-token-at-rest encryption --------------------------
OAUTH_ENCRYPTION_KEY={{ vault_oauth_encryption_key }}

# --- SMTP ------------------------------------------------------------
SMTP_HOST=smtp.veza.fr
SMTP_PORT=587
SMTP_USER=ops@veza.fr
SMTP_PASSWORD={{ vault_smtp_password }}
SMTP_FROM=noreply@veza.fr

# --- Payments (Hyperswitch + Stripe Connect) -------------------------
HYPERSWITCH_ENABLED={{ veza_feature_flags.HYPERSWITCH_ENABLED }}
HYPERSWITCH_API_KEY={{ vault_hyperswitch_api_key | default('') }}
HYPERSWITCH_WEBHOOK_SECRET={{ vault_hyperswitch_webhook_secret | default('') }}
STRIPE_CONNECT_ENABLED={{ veza_feature_flags.STRIPE_CONNECT_ENABLED }}
STRIPE_SECRET_KEY={{ vault_stripe_secret_key | default('') }}

# --- WebAuthn / passkeys ---------------------------------------------
WEBAUTHN_ENABLED={{ veza_feature_flags.WEBAUTHN_ENABLED }}
WEBAUTHN_RP_ID={{ veza_public_host }}
WEBAUTHN_RP_NAME=Veza

# --- Observability ---------------------------------------------------
SENTRY_DSN={{ vault_sentry_dsn | default('') }}
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector.{{ veza_incus_dns_suffix }}:4317
OTEL_SERVICE_NAME=veza-backend
OTEL_TRACES_SAMPLER=parentbased_traceidratio
OTEL_TRACES_SAMPLER_ARG={{ veza_otel_sample_rate }}

# --- Migrations ------------------------------------------------------
# Backend auto-migrates on boot. Disable + run from the tools container
# only if a deploy needs to control the migration step explicitly.
RUN_MIGRATIONS_ON_BOOT=true

infra/ansible/roles/veza_app/templates/stream.env.j2 (new file, 54 lines)
@@ -0,0 +1,54 @@

# Managed by Ansible — do not edit by hand. veza_app role,
# templates/stream.env.j2 ; rendered fresh on every deploy.
# Sourced by /etc/systemd/system/veza-stream.service via EnvironmentFile=.

# --- Runtime ---------------------------------------------------------
APP_ENV={{ veza_env }}
RUST_LOG={{ veza_log_level | lower }}
PORT={{ veza_stream_port }}
HOST=0.0.0.0
RELEASE_SHA={{ veza_release_sha }}
COLOR={{ veza_target_color }}

# --- Required: stream server's symmetric secret (≥32 chars). ---------
# Reused for HMAC signing of HLS segment URLs + cache key salting.
# Distinct from JWT signing — stream verifies tokens with the
# backend's RS256 public key (path below) but signs its own
# short-lived stream URLs with this.
SECRET_KEY={{ vault_stream_internal_api_key }}

# --- Backend ↔ stream shared secret -----------------------------------
# Same value the backend stamps in X-Internal-API-Key for /api/v1/internal/*.
# Stream rejects internal calls without a matching header.
INTERNAL_API_KEY={{ vault_stream_internal_api_key }}
BACKEND_BASE_URL=http://{{ veza_container_prefix }}backend-{{ veza_target_color }}.{{ veza_incus_dns_suffix }}:{{ veza_backend_port }}

# --- JWT verification (RS256 public key only — stream never signs) ---
JWT_PUBLIC_KEY_PATH={{ veza_config_root }}/secrets/jwt-public.pem
JWT_ALGORITHM=RS256

# --- Object storage (MinIO — pulls audio for transcode + HLS) -------
S3_ENDPOINT=http://{{ veza_container_prefix }}minio-1.{{ veza_incus_dns_suffix }}:9000
S3_REGION=us-east-1
S3_ACCESS_KEY={{ vault_minio_access_key }}
S3_SECRET_KEY={{ vault_minio_secret_key }}
S3_BUCKET=veza-{{ veza_env }}

# --- RabbitMQ (event bus — degraded mode tolerated, see lib.rs) ------
RABBITMQ_URL=amqp://veza:{{ vault_rabbitmq_password }}@{{ veza_container_prefix }}rabbitmq.{{ veza_incus_dns_suffix }}:5672/veza

# --- Observability ---------------------------------------------------
SENTRY_DSN={{ vault_sentry_dsn | default('') }}
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector.{{ veza_incus_dns_suffix }}:4317
OTEL_SERVICE_NAME=veza-stream
OTEL_TRACES_SAMPLER=parentbased_traceidratio
OTEL_TRACES_SAMPLER_ARG={{ veza_otel_sample_rate }}
OTEL_RESOURCE_ATTRIBUTES=deployment.environment={{ veza_env }},service.version={{ veza_release_sha[:12] }}

# --- Streaming-specific -----------------------------------------------
# HLS segment cache lives under {{ veza_state_root }}/hls — sized small
# (~500 MB) since MinIO is the source of truth and segments are
# regenerated on miss.
HLS_CACHE_DIR={{ veza_state_root }}/hls
HLS_CACHE_MAX_BYTES=536870912
HLS_SEGMENT_DURATION_SECONDS=6

infra/ansible/roles/veza_app/templates/veza-backend.service.j2 (new file, 33 lines)
@@ -0,0 +1,33 @@

# Managed by Ansible — do not edit by hand.
# veza_app role, templates/veza-backend.service.j2.
# Released SHA: {{ veza_release_sha }} ; color: {{ veza_target_color }}
[Unit]
Description=Veza backend API (Go) — color {{ veza_target_color }}, sha {{ veza_release_sha[:12] }}
Documentation=https://veza.fr/docs
After=network-online.target
Wants=network-online.target
AssertPathExists={{ veza_app_current_link }}/{{ veza_app_binary_name }}

[Service]
Type=simple
User={{ veza_app_user }}
Group={{ veza_app_group }}
EnvironmentFile=-{{ veza_app_env_file }}
WorkingDirectory={{ veza_app_current_link }}
ExecStart={{ veza_app_current_link }}/{{ veza_app_binary_name }}
Restart=on-failure
RestartSec=5s
LimitNOFILE=65535

# Hardening — same baseline as the other Ansible-managed daemons.
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths={{ veza_app_install_dir }} {{ veza_log_root }} {{ veza_state_root }}
PrivateTmp=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true

[Install]
WantedBy=multi-user.target

infra/ansible/roles/veza_app/templates/veza-stream.service.j2 (new file, 35 lines)
@@ -0,0 +1,35 @@

# Managed by Ansible — do not edit by hand.
# veza_app role, templates/veza-stream.service.j2.
# Released SHA: {{ veza_release_sha }} ; color: {{ veza_target_color }}
[Unit]
Description=Veza stream server (Rust/Axum) — color {{ veza_target_color }}, sha {{ veza_release_sha[:12] }}
Documentation=https://veza.fr/docs
After=network-online.target
Wants=network-online.target
AssertPathExists={{ veza_app_current_link }}/{{ veza_app_binary_name }}

[Service]
Type=simple
User={{ veza_app_user }}
Group={{ veza_app_group }}
EnvironmentFile=-{{ veza_app_env_file }}
WorkingDirectory={{ veza_app_current_link }}
ExecStart={{ veza_app_current_link }}/{{ veza_app_binary_name }}
Restart=on-failure
RestartSec=5s
# Stream server holds many WebSocket + HLS connections in flight ;
# the default LimitNOFILE=1024 chokes around 200 concurrent listeners.
LimitNOFILE=131072

# Hardening — same baseline as the backend.
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths={{ veza_app_install_dir }} {{ veza_log_root }} {{ veza_state_root }}
PrivateTmp=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true

[Install]
WantedBy=multi-user.target

infra/ansible/roles/veza_app/templates/veza-web-nginx.conf.j2 (new file, 78 lines)
@@ -0,0 +1,78 @@

# Managed by Ansible — do not edit by hand.
# veza_app role, templates/veza-web-nginx.conf.j2.
# Released SHA: {{ veza_release_sha }} ; color: {{ veza_target_color }}

# We are upstream of the global HAProxy — no TLS here, no rate limit,
# no auth. Just serve dist/ with strong cache headers + the SPA
# fallback (try_files ... /index.html) so client-side routes resolve
# on hard reload.

server {
    listen {{ veza_web_port }} default_server;
    listen [::]:{{ veza_web_port }} default_server;
    server_name _;

    root {{ veza_app_current_link }};
    index index.html;

    # Health endpoint HAProxy checks. Returns 200 + a one-byte body so
    # we never accidentally rely on the 200 having content. Path /health
    # would conflict with backend's /health, but `/` is what veza_web
    # is checked on per veza_healthcheck_paths.web — kept here as a
    # simple internal probe target if ops wants to bypass the SPA.
    location = /__nginx_alive {
        access_log off;
        default_type text/plain;
        return 200 "ok\n";
    }

    # Long-cache the immutable hashed bundles (Vite emits content-
    # hashed filenames in assets/). 1 year + immutable.
    location /assets/ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        try_files $uri =404;
    }

    # Service worker — must NEVER be long-cached or PWA updates
    # stall on stale clients.
    location = /sw.js {
        expires off;
        add_header Cache-Control "no-cache, no-store, must-revalidate";
        try_files $uri =404;
    }
    location = /workbox-config.js {
        expires off;
        add_header Cache-Control "no-cache, no-store, must-revalidate";
        try_files $uri =404;
    }

    # Manifest, robots, favicon — short cache so SEO/PWA edits land
    # within minutes of a deploy.
    location ~* \.(webmanifest|json|xml|txt|ico)$ {
        expires 5m;
        add_header Cache-Control "public, max-age=300";
        try_files $uri =404;
    }

    # SPA fallback — every unknown route gets index.html so React
    # Router resolves it.
    location / {
        try_files $uri $uri/ /index.html;
        expires 5m;
        add_header Cache-Control "public, max-age=300";
        # Security headers — defense in depth ; HAProxy strips/adds
        # them upstream in prod.
        add_header X-Content-Type-Options "nosniff" always;
        add_header Referrer-Policy "strict-origin-when-cross-origin" always;
        add_header X-Frame-Options "DENY" always;
    }

    # Pre-compressed gzip assets if Vite emitted them
    gzip_static on;

    # Errors lean on /index.html so a deep link reload doesn't show
    # nginx's default page.
    error_page 404 /index.html;
    error_page 500 502 503 504 /index.html;
}

infra/ansible/roles/veza_app/vars/backend.yml (new file, 39 lines)
@@ -0,0 +1,39 @@

# Backend (Go API) component vars — loaded by tasks/main.yml when
# `veza_component == 'backend'`. Higher precedence than defaults/main.yml
# so anything here wins.
---
veza_app_kind: binary
veza_app_binary_name: veza-api
veza_app_listen_port: "{{ veza_backend_port }}"
veza_app_health_path: "{{ veza_healthcheck_paths.backend }}"

# Per-component env file consumed by the systemd unit's
# EnvironmentFile= directive. The path lives outside install_dir so
# rolling forward to a new release SHA doesn't require re-rendering.
veza_app_env_file: "{{ veza_config_root }}/backend.env"
veza_app_env_template: backend.env.j2
veza_app_service_name: veza-backend
veza_app_service_template: veza-backend.service.j2

# OS packages installed on top of veza_common_os_packages. Backend
# embeds a libpq-style postgres client to feed migrate_tool when run
# from inside this container (rare; usually migrations run from a
# dedicated tools container — but having psql lets ops recover by
# hand if the tools container is unavailable).
veza_app_extra_packages:
  - postgresql-client
  - libssl3

# Secret files rendered to disk from Vault and referenced by the env
# file via path-based env vars. Each entry names the vault variable,
# the absolute destination path, the file mode, and (optionally) a
# `decode` flag; the role iterates over this list, decoding base64
# before write where the source is known to be PEM.
veza_app_secret_files:
  - var: vault_jwt_signing_key_b64
    path: "{{ veza_config_root }}/secrets/jwt-private.pem"
    mode: "0400"
    decode: base64
  - var: vault_jwt_public_key_b64
    path: "{{ veza_config_root }}/secrets/jwt-public.pem"
    mode: "0440"
    decode: base64
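
For context, the `*_b64` vault values referenced above are typically produced like this before being vaulted (a sketch; the PEM filenames are illustrative):

```bash
base64 -w0 jwt-private.pem   # value for vault_jwt_signing_key_b64
base64 -w0 jwt-public.pem    # value for vault_jwt_public_key_b64
```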

infra/ansible/roles/veza_app/vars/stream.yml (new file, 27 lines)
@@ -0,0 +1,27 @@

# Stream server (Rust) component vars.
---
veza_app_kind: binary
veza_app_binary_name: stream_server
veza_app_listen_port: "{{ veza_stream_port }}"
veza_app_health_path: "{{ veza_healthcheck_paths.stream }}"

veza_app_env_file: "{{ veza_config_root }}/stream.env"
veza_app_env_template: stream.env.j2
veza_app_service_name: veza-stream
veza_app_service_template: veza-stream.service.j2

# The stream server is a self-contained musl-static binary, so the
# only OS deps are the common set + libssl for outbound TLS to MinIO.
# (libssl3 is technically already in the common set on Debian 13;
# listing it here is explicit so a future common-set trim doesn't
# break stream silently.)
veza_app_extra_packages:
  - libssl3

# Stream's only secret is the JWT public key (used to verify access
# tokens issued by the backend). No private key — stream never signs.
veza_app_secret_files:
  - var: vault_jwt_public_key_b64
    path: "{{ veza_config_root }}/secrets/jwt-public.pem"
    mode: "0440"
    decode: base64

infra/ansible/roles/veza_app/vars/web.yml (new file, 33 lines)
@@ -0,0 +1,33 @@

# Frontend (React/Vite, static SPA served by nginx) component vars.
# Different shape from backend/stream: no custom binary, no env file,
# no systemd unit owned by Veza — just a tarball of static files
# extracted under nginx's docroot.
---
veza_app_kind: static
veza_app_listen_port: "{{ veza_web_port }}"
veza_app_health_path: "{{ veza_healthcheck_paths.web }}"

# Where the SPA's `dist/` lands. Per-SHA dir is symlinked-to by
# /var/www/veza-web/current; nginx points at the symlink so a switch
# is one symlink + one nginx -s reload (out of scope for this role —
# the role recreates the container so nginx starts fresh anyway).
veza_app_install_dir: /var/www/veza-web
veza_app_release_dir: "{{ veza_app_install_dir }}/{{ veza_release_sha }}"
veza_app_current_link: "{{ veza_app_install_dir }}/current"

# nginx site config — render and drop into sites-enabled/.
veza_app_nginx_site: /etc/nginx/sites-enabled/veza-web.conf
veza_app_nginx_template: veza-web-nginx.conf.j2

# nginx is THE service for this component. We don't ship a custom
# systemd unit; we ensure nginx is enabled+started + has a clean
# config.
veza_app_service_name: nginx

veza_app_extra_packages:
  - nginx

# Frontend has no Vault secrets at runtime — every value bakes into
# the bundle at build time via VITE_* env vars. Empty list means the
# secret-file install task is a no-op.
veza_app_secret_files: []
135
infra/ansible/tests/test_rabbitmq_outage.sh
Executable file
135
infra/ansible/tests/test_rabbitmq_outage.sh
Executable file
|
|
@ -0,0 +1,135 @@
|
|||
#!/usr/bin/env bash
|
||||
# test_rabbitmq_outage.sh — Game day scenario E.
|
||||
#
|
||||
# Verifies the backend's RabbitMQ-down behaviour matches the contract :
|
||||
# - the API stays up (no 5xx on /api/v1/health)
|
||||
# - the eventbus logs ERROR (loud, not silent) when publishes fail
|
||||
# - the API recovers cleanly when RabbitMQ comes back
|
||||
#
|
||||
# v1.0.9 W5 Day 22.
|
||||
#
|
||||
# Usage :
|
||||
# bash infra/ansible/tests/test_rabbitmq_outage.sh
|
||||
#
|
||||
# Exit codes :
|
||||
# 0 — backend stayed up + recovered + error logged loudly
|
||||
# 1 — backend wasn't healthy at start
|
||||
# 2 — observed silent fail, 5xx, or no error log during outage
|
||||
# 3 — required tool missing
|
||||
set -euo pipefail
|
||||
|
||||
RABBITMQ_CONTAINER=${RABBITMQ_CONTAINER:-rabbitmq}
|
||||
BACKEND_HOST=${BACKEND_HOST:-haproxy.lxd}
|
||||
BACKEND_PORT=${BACKEND_PORT:-80}
|
||||
HEALTH_PATH=${HEALTH_PATH:-/api/v1/health}
|
||||
OUTAGE_SECONDS=${OUTAGE_SECONDS:-60} # default 60s ; roadmap says 30 min in prod drill
|
||||
|
||||
log() { printf '[%s] %s\n' "$(date +%H:%M:%S)" "$*" >&2; }
|
||||
fail() { log "FAIL: $*"; exit "${2:-2}"; }
|
||||
|
||||
require() {
|
||||
command -v "$1" >/dev/null 2>&1 || fail "required tool missing: $1" 3
|
||||
}
|
||||
|
||||
require incus
|
||||
require curl
|
||||
require date
|
||||
|
||||
curl_health() {
|
||||
curl --max-time 5 -sS -o /dev/null -w "%{http_code}" \
|
||||
"http://${BACKEND_HOST}:${BACKEND_PORT}${HEALTH_PATH}" || echo "000"
|
||||
}
|
||||
|
||||
# -----------------------------------------------------------------------------
|
||||
# 0. Pre-flight — backend healthy, RabbitMQ container running.
|
||||
# -----------------------------------------------------------------------------
|
||||
log "step 0: pre-flight"
|
||||
status=$(curl_health)
|
||||
log " backend health : HTTP $status"
|
||||
if [ "$status" != "200" ]; then
|
||||
fail "backend not healthy at start ($status), aborting" 1
|
||||
fi
|
||||
if ! incus info "$RABBITMQ_CONTAINER" >/dev/null 2>&1; then
|
||||
fail "rabbitmq container '$RABBITMQ_CONTAINER' not found" 1
|
||||
fi
|
||||
|
||||
# -----------------------------------------------------------------------------
|
||||
# 1. Take RabbitMQ down for OUTAGE_SECONDS.
|
||||
# -----------------------------------------------------------------------------
|
||||
log "step 1: stopping $RABBITMQ_CONTAINER for ${OUTAGE_SECONDS}s"
|
||||
incus stop --force "$RABBITMQ_CONTAINER"
|
||||
t0=$(date +%s)
|
||||
|
||||
# -----------------------------------------------------------------------------
|
||||
# 2. Probe the backend every 5s during the outage. Acceptance gate :
|
||||
# no run of 5xx longer than 30s.
|
||||
# -----------------------------------------------------------------------------
|
||||
log "step 2: probing backend during the outage"
|
||||
five_xx_streak=0
|
||||
five_xx_max_streak=0
|
||||
deadline=$((t0 + OUTAGE_SECONDS))
|
||||
while [ "$(date +%s)" -lt "$deadline" ]; do
|
||||
s=$(curl_health)
|
||||
if [ "$s" -ge 500 ] 2>/dev/null; then
|
||||
five_xx_streak=$((five_xx_streak + 1))
|
||||
if [ "$five_xx_streak" -gt "$five_xx_max_streak" ]; then
|
||||
five_xx_max_streak=$five_xx_streak
|
||||
fi
|
||||
else
|
||||
five_xx_streak=0
|
||||
fi
|
||||
log " [t+$(($(date +%s) - t0))s] backend HTTP $s (5xx streak=$five_xx_streak)"
|
||||
sleep 5
|
||||
done
|
||||
|
||||
# 30s = 6 consecutive 5s probes. If max_streak >= 6 we exceeded the gate.
|
||||
if [ "$five_xx_max_streak" -ge 6 ]; then
|
||||
log "FAIL gate : observed $((five_xx_max_streak * 5))s of consecutive 5xx during outage (cap is 30s)"
|
||||
log "step 3: restarting $RABBITMQ_CONTAINER (cleanup)"
|
||||
incus start "$RABBITMQ_CONTAINER" || true
|
||||
exit 2
|
||||
fi
|
||||
|
||||

# -----------------------------------------------------------------------------
# 3. Verify the backend logged an ERROR for the RabbitMQ failure (loud, not
#    silent). We grep journald on backend-api-1 — assumes the API logs to
#    journald via the systemd unit rendered by the backend_api role.
# -----------------------------------------------------------------------------
log "step 3: looking for 'eventbus' / 'rabbitmq' ERROR log lines"
err_count=$(incus exec backend-api-1 -- journalctl -u veza-backend-api --since="-${OUTAGE_SECONDS} seconds" 2>/dev/null \
    | grep -ciE "rabbitmq|eventbus" || true)
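# Note : the count includes every journald line that mentions rabbitmq or
# eventbus during the window, not only ERROR-level entries ; eyeball the
# matching lines before concluding the failure was logged loudly.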
log " matching log lines : $err_count"
|
||||
if [ "$err_count" -lt 1 ]; then
|
||||
log "WARN : no rabbitmq/eventbus ERROR log lines during the outage."
|
||||
log " Either the eventbus path went unused (no events published) or"
|
||||
log " failures are swallowed silently — open an issue if the latter."
|
||||
# Don't fail the whole test on this — depending on traffic during the
|
||||
# window the eventbus may not have been touched. Operator inspects.
|
||||
fi
|
||||
|
||||

# -----------------------------------------------------------------------------
# 4. Restart RabbitMQ + verify backend recovers.
# -----------------------------------------------------------------------------
log "step 4: restarting $RABBITMQ_CONTAINER"
incus start "$RABBITMQ_CONTAINER"

log "  waiting up to 60s for backend to be 200 again"
deadline=$(( $(date +%s) + 60 ))
recovered=0
while [ "$(date +%s)" -lt "$deadline" ]; do
    s=$(curl_health)
    if [ "$s" = "200" ]; then
        recovered=1
        break
    fi
    sleep 5
done
if [ "$recovered" -ne 1 ]; then
    fail "backend did not return to HTTP 200 within 60s after RabbitMQ recovery" 2
fi

log "PASS : backend stayed available during ${OUTAGE_SECONDS}s RabbitMQ outage"
log "  max consecutive 5xx streak : ${five_xx_max_streak} probes ($((five_xx_max_streak * 5))s)"
log "  eventbus error log lines   : $err_count"
exit 0
148
scripts/security/game-day-driver.sh
Executable file
@@ -0,0 +1,148 @@
#!/usr/bin/env bash
# game-day-driver.sh — orchestrate the W5 Day 22 game-day exercise.
#
# Walks the 5 failure scenarios in sequence, captures stdout/stderr +
# exit code per scenario, writes a session report under
# docs/runbooks/game-days/<DATE>-game-day-driver.log, and prints a
# summary table at the end.
#
# v1.0.9 W5 Day 22.
#
# Scenarios (mapped to existing smoke tests) :
#   A : test_pg_failover.sh       — kill Postgres primary, RTO < 60s
#   B : test_backend_failover.sh  — kill backend-api 1, HAProxy fails over
#   C : test_redis_failover.sh    — kill Redis master, Sentinel promotes
#   D : test_minio_resilience.sh  — kill 2 MinIO nodes, EC:2 reconstructs
#   E : test_rabbitmq_outage.sh   — stop RabbitMQ 60s, backend stays up
#
# Usage :
#   bash scripts/security/game-day-driver.sh             # run all scenarios
#   SKIP=DE bash scripts/security/game-day-driver.sh     # skip scenarios D + E
#   ONLY=A bash scripts/security/game-day-driver.sh      # only run scenario A
#
# Required env (passed through to the underlying smoke tests) :
#   REDIS_PASS / SENTINEL_PASS              for scenario C
#   MINIO_ROOT_USER / MINIO_ROOT_PASSWORD   for scenario D
#
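# Example (illustrative ; secrets elided) :
#   ONLY=CD REDIS_PASS=... SENTINEL_PASS=... \
#   MINIO_ROOT_USER=... MINIO_ROOT_PASSWORD=... \
#       bash scripts/security/game-day-driver.sh   # scenarios C + D with their secrets
#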
# Exit codes :
#   0 — every selected scenario passed
#   1 — at least one scenario failed
#   2 — runner pre-flight failed (script missing, etc.)
set -euo pipefail
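# -e : stop on the first failing command ; -u : unset variables are errors ;
# -o pipefail : a pipeline's exit status is non-zero if any stage fails.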

REPO_ROOT="$(cd "$(dirname "$0")/../.." && pwd)"
TESTS_DIR="$REPO_ROOT/infra/ansible/tests"
LOGS_DIR="$REPO_ROOT/docs/runbooks/game-days"
SESSION_DATE="$(date +%Y-%m-%d-%H%M)"
SESSION_LOG="$LOGS_DIR/$SESSION_DATE-game-day-driver.log"

mkdir -p "$LOGS_DIR"
: > "$SESSION_LOG"
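# The ':' no-op above truncates (or creates) the session log so every run
# starts from a clean file.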

ONLY=${ONLY:-}
SKIP=${SKIP:-}

log() { printf '[%s] %s\n' "$(date +%H:%M:%S)" "$*" | tee -a "$SESSION_LOG" >&2; }
fail() { log "FAIL: $1"; exit "${2:-2}"; }

declare -A SCENARIO_SCRIPT=(
    [A]="$TESTS_DIR/test_pg_failover.sh"
    [B]="$TESTS_DIR/test_backend_failover.sh"
    [C]="$TESTS_DIR/test_redis_failover.sh"
    [D]="$TESTS_DIR/test_minio_resilience.sh"
    [E]="$TESTS_DIR/test_rabbitmq_outage.sh"
)
declare -A SCENARIO_DESC=(
    [A]="Postgres primary failover RTO < 60s"
    [B]="HAProxy backend-api 1 fail-over"
    [C]="Redis Sentinel master promotion"
    [D]="MinIO 2-node loss EC:2 reconstruction"
    [E]="RabbitMQ outage backend stays up"
)
SCENARIOS=(A B C D E)

want() {
    local s=$1
    if [ -n "$ONLY" ] && [[ "$ONLY" != *"$s"* ]]; then return 1; fi
    if [ -n "$SKIP" ] && [[ "$SKIP" == *"$s"* ]]; then return 1; fi
    return 0
}
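# ONLY / SKIP are plain substring matches on the scenario letter : ONLY=AC runs
# only A and C, SKIP=DE skips D and E ; if a letter appears in both, SKIP wins.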

# Pre-flight : every selected scenario script must exist + be executable.
for s in "${SCENARIOS[@]}"; do
    if want "$s"; then
        script="${SCENARIO_SCRIPT[$s]}"
        if [ ! -x "$script" ]; then
            fail "scenario $s : script $script not found or not executable" 2
        fi
    fi
done

declare -A SCENARIO_RESULT
declare -A SCENARIO_DURATION

log "================================================================"
log "Game day session : $SESSION_DATE"
log "Session log      : $SESSION_LOG"
log "Scenarios run    : ${SCENARIOS[*]}"
[ -n "$ONLY" ] && log "ONLY filter      : $ONLY"
[ -n "$SKIP" ] && log "SKIP filter      : $SKIP"
log "================================================================"
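
# Run each selected scenario with errexit suspended so a failing smoke test
# doesn't kill the driver ; PIPESTATUS[0] keeps the scenario script's own exit
# code rather than tee's.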
for s in "${SCENARIOS[@]}"; do
    if ! want "$s"; then
        SCENARIO_RESULT[$s]="SKIPPED"
        SCENARIO_DURATION[$s]="-"
        continue
    fi

    log ""
    log "── scenario $s : ${SCENARIO_DESC[$s]} ──────────────────────────"
    t0=$(date +%s)
    set +e
    "${SCENARIO_SCRIPT[$s]}" 2>&1 | tee -a "$SESSION_LOG"
    rc=${PIPESTATUS[0]}
    set -e
    elapsed=$(( $(date +%s) - t0 ))
    SCENARIO_DURATION[$s]="${elapsed}s"
    if [ "$rc" -eq 0 ]; then
        SCENARIO_RESULT[$s]="PASS"
        log "scenario $s : PASS in ${elapsed}s"
    else
        SCENARIO_RESULT[$s]="FAIL (exit $rc)"
        log "scenario $s : FAIL (exit $rc) after ${elapsed}s"
    fi
done

log ""
log "================================================================"
log "Session summary"
log "----------------------------------------------------------------"
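# Column layout of the summary table (rows below are illustrative only) :
#   A   | PASS         | 74s      | Postgres primary failover RTO < 60s
#   E   | SKIPPED      | -        | RabbitMQ outage backend stays up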
printf '%-3s | %-12s | %-8s | %s\n' "ID" "result" "duration" "scenario" | tee -a "$SESSION_LOG" >&2
printf '%-3s-+-%-12s-+-%-8s-+-%s\n' "---" "------------" "--------" "$(printf '%.0s-' {1..50})" | tee -a "$SESSION_LOG" >&2
overall=0
for s in "${SCENARIOS[@]}"; do
    result=${SCENARIO_RESULT[$s]}
    duration=${SCENARIO_DURATION[$s]}
    printf '%-3s | %-12s | %-8s | %s\n' "$s" "$result" "$duration" "${SCENARIO_DESC[$s]}" \
        | tee -a "$SESSION_LOG" >&2
    if [[ "$result" == "FAIL"* ]]; then overall=1; fi
done
log "================================================================"
log ""
log "Operator next steps :"
log "  1. Open the runbook template :"
log "       docs/runbooks/game-days/$SESSION_DATE.md"
log "     (copy from docs/runbooks/game-days/TEMPLATE.md if missing)"
log "  2. For each scenario, fill : timestamp, action, observation,"
log "     runbook used, gap discovered."
log "  3. File one PR per gap that needs a code or runbook fix."
log ""

if [ "$overall" -eq 0 ]; then
    log "PASS : every selected scenario passed."
else
    log "FAIL : at least one scenario failed — review $SESSION_LOG."
fi

exit "$overall"