ROADMAP_V1.0_LAUNCH.md §Semaine 2 day 8 deliverable:
- Postgres backups land in MinIO via pgbackrest
- dr-drill restores them weekly into an ephemeral Incus container
and asserts the data round-trips
- Prometheus alerts fire when the drill fails OR when the timer
has stopped firing for >8 days
Cadence:
full — weekly (Sun 02:00 UTC, systemd timer)
diff — daily (Mon-Sat 02:00 UTC, systemd timer)
WAL — continuous (postgres archive_command, archive_timeout=60s)
drill — weekly (Sun 04:00 UTC — runs 2h after the Sun full so
the restore exercises fresh data)
RPO ≈ 1 min (archive_timeout). RTO ≤ 30 min (drill measures actual
restore wall-clock).
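The three OnCalendar expressions can be sanity-checked locally
before the timers ship; a quick sketch (the exact strings in the
unit templates may differ, these simply mirror the cadence above):
$ systemd-analyze calendar 'Sun 02:00 UTC'       # full
$ systemd-analyze calendar 'Mon..Sat 02:00 UTC'  # diff
$ systemd-analyze calendar 'Sun 04:00 UTC'       # drill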
Files:
infra/ansible/roles/pgbackrest/
defaults/main.yml — repo1-* config (MinIO/S3, path-style,
aes-256-cbc encryption, vault-backed creds), retention 4 full
/ 7 diff / 4 archive cycles, zstd@3 compression. The role's
first task asserts the placeholder secrets are gone — refuses
to apply until the vault carries real keys.
tasks/main.yml — install pgbackrest, render
/etc/pgbackrest/pgbackrest.conf, set archive_command on the
postgres instance via ALTER SYSTEM, detect role at runtime
via `pg_autoctl show state --json`, stanza-create from primary
only, render + enable systemd timers (full + diff + drill).
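In shell terms the runtime logic boils down to the sketch below
(stanza name "veza" and the reported_state JSON key are
assumptions; the real role matches on the local node rather than
grepping the whole formation):
$ sudo -u postgres psql -c \
    "ALTER SYSTEM SET archive_command = 'pgbackrest --stanza=veza archive-push %p'"
$ sudo -u postgres psql -c "ALTER SYSTEM SET archive_timeout = '60s'"
$ sudo -u postgres psql -c "SELECT pg_reload_conf()"
$ pg_autoctl show state --json | grep -q '"reported_state": "primary"' \
    && sudo -u postgres pgbackrest --stanza=veza stanza-create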
templates/pgbackrest.conf.j2 — global + per-stanza sections;
pg1-path defaults to the pg_auto_failover state dir so the
role plugs straight into the Day 6 formation.
templates/pgbackrest-{full,diff,drill}.{service,timer}.j2 —
systemd units. Backup services run as `postgres`,
drill service runs as `root` (needs `incus`).
RandomizedDelaySec on every timer to absorb clock skew + node
collision risk.
README.md — RPO/RTO guarantees, vault setup, repo wiring,
operational cheatsheet (info / check / manual backup),
restore procedure documented separately as the dr-drill.
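The cheatsheet maps onto the standard pgbackrest verbs (stanza
name "veza" assumed throughout):
$ sudo -u postgres pgbackrest --stanza=veza info    # backups + archive status
$ sudo -u postgres pgbackrest --stanza=veza check   # archiving + repo reachability
$ sudo -u postgres pgbackrest --stanza=veza --type=full backup   # manual full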
scripts/dr-drill.sh
Acceptance script for the day. Sequence:
0. pre-flight: required tools, latest backup metadata visible
1. launch ephemeral `pg-restore-drill` Incus container
2. install postgres + pgbackrest inside, push the SAME
pgbackrest.conf as the host (restore only ever reads from the
bucket, per pgbackrest semantics; the same S3 keys get reused so
the drill exercises the production credential path)
3. `pgbackrest restore` — full + WAL replay
4. start postgres, wait for pg_isready
5. smoke query: SELECT count(*) FROM users — must be ≥ MIN_USERS_EXPECTED
6. write veza_backup_drill_* metrics to the textfile-collector
7. teardown (or --keep for postmortem inspection)
Exit codes 0/1/2 (pass / drill failure / env problem) so a
Prometheus runner can plug in directly.
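Compressed shell sketch of the happy path (image alias, stanza
name, and the exact metric name are illustrative; the real script
adds waits, error handling, and teardown):
$ incus launch images:debian/12 pg-restore-drill --ephemeral
$ incus exec pg-restore-drill -- apt-get update
$ incus exec pg-restore-drill -- apt-get install -y postgresql pgbackrest
$ incus file push /etc/pgbackrest/pgbackrest.conf \
    pg-restore-drill/etc/pgbackrest/pgbackrest.conf
$ incus exec pg-restore-drill -- systemctl stop postgresql
$ incus exec pg-restore-drill -- sudo -u postgres \
    pgbackrest --stanza=veza --delta restore   # --delta tolerates the packaging-created cluster
$ incus exec pg-restore-drill -- systemctl start postgresql
$ incus exec pg-restore-drill -- pg_isready -t 60
$ incus exec pg-restore-drill -- sudo -u postgres psql -tAc 'SELECT count(*) FROM users'
$ echo 'veza_backup_drill_success 1' \
    > /var/lib/node_exporter/textfile_collector/veza_backup_drill.prom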
config/prometheus/alert_rules.yml — new `veza_backup` group:
- BackupRestoreDrillFailed (critical, 5m): the last drill
reported success=0. Pages because a backup we haven't proved
restorable is technical debt waiting for a disaster.
- BackupRestoreDrillStale (warning, pending 1h once the last
drill is >8 days old): the
drill timer has stopped firing. Catches a broken cron / unit
/ runner before the failure-mode alert above ever sees data.
Both annotations include a runbook_url stub
(veza.fr/runbooks/...) — those land alongside W2 day 10's
SLO runbook batch.
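Rule syntax can be validated with promtool on top of the YAML
round-trip in the acceptance block below:
$ promtool check rules config/prometheus/alert_rules.yml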
infra/ansible/playbooks/postgres_ha.yml
Two new plays:
6. apply pgbackrest role to postgres_ha_nodes (install +
config + full/diff timers on every data node;
pgbackrest's repo lock arbitrates collision)
7. install dr-drill on the incus_hosts group (push
/usr/local/bin/dr-drill.sh + render drill timer + ensure
/var/lib/node_exporter/textfile_collector exists)
Acceptance verified locally:
$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
--syntax-check
playbook: playbooks/postgres_ha.yml ← clean
$ python3 -c "import yaml; yaml.safe_load(open('config/prometheus/alert_rules.yml'))"
YAML OK
$ bash -n scripts/dr-drill.sh
syntax OK
Real apply + drill needs the lab R720 + a populated MinIO bucket
+ the secrets in vault — operator's call.
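For reference, the eventual apply is the same playbook without
--syntax-check; --limit can scope it to the backup-related groups
(both group names are from the plays above):
$ ansible-playbook -i inventory/lab.yml playbooks/postgres_ha.yml \
    --limit postgres_ha_nodes,incus_hosts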
Out of scope (deferred per ROADMAP §2):
- Off-site backup replica (B2 / Bunny.net) — v1.1+
- Logical export pipeline for RGPD per-user dumps — separate
feature track, not a backup-system concern
- PITR admin UI — CLI-only via `--type=time` for v1.0
- pgbackrest_exporter Prometheus integration — W2 day 9
alongside the OTel collector
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Veza Monorepo
Current version: v1.0.4 (post-audit cleanup + consolidation). See CHANGELOG.md and docs/PROJECT_STATE.md.
Project Structure
- apps/web — Frontend React 18 + Vite 5 + TypeScript strict (source of truth for the UI)
- veza-backend-api — Main Go 1.25 API service (Gin, GORM, Postgres, Redis, RabbitMQ, Elasticsearch). Handles REST, WebSocket, and chat (the chat server was merged into this service in v0.502).
- veza-stream-server — Rust streaming server (Axum 0.8, Tokio 1.35, Symphonia): HLS, HTTP Range, WebSocket, gRPC
- veza-common — Shared Rust types and logging
- packages/design-system — Shared design tokens
See CLAUDE.md for the full architecture map.
Development Setup
Prerequisites: Node 20 (see .nvmrc), Go, Rust, Docker. Configure .env from .env.example.
# Verify environment
make doctor
./scripts/validate-env.sh development
# Install dependencies
make install-deps
# Option A — Backend in Docker + Web local
make dev
# Option B — All apps local with hot reload (infra from docker-compose.dev.yml)
make dev-full
# Option C — Infra only, then run services manually
docker compose -f docker-compose.dev.yml up -d
make dev-web # or make dev-backend-api, make dev-stream-server
See docs/ENV_VARIABLES.md for required variables. make build builds all services.
Quick Start
Frontend only
cd apps/web
npm install
npm run dev
Docker Production
Canonical production compose file: docker-compose.prod.yml
docker compose -f docker-compose.prod.yml up -d
See make/config.mk for COMPOSE_PROD and deployment docs.
CI/CD
- Badge: CI status above. Set SLACK_WEBHOOK_URL (Incoming Webhook) in repo secrets to receive Slack notifications on failure.
Disabled workflows
- Storybook (chromatic.yml.disabled, storybook-audit.yml.disabled, visual-regression.yml.disabled): deferred until MSW is wired up for /api/v1/auth/me and /api/v1/logs/frontend; the missing handlers currently cause ~1,400 network errors in the Storybook build. The npm scripts (storybook, build-storybook) still work locally for one-off component inspection. To reactivate in CI, fix the MSW handlers and rename the three files back to .yml, as sketched below.
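Assuming the workflows live under .github/workflows/ (the standard location; not confirmed here), reactivation is three renames:
git mv .github/workflows/chromatic.yml.disabled .github/workflows/chromatic.yml
git mv .github/workflows/storybook-audit.yml.disabled .github/workflows/storybook-audit.yml
git mv .github/workflows/visual-regression.yml.disabled .github/workflows/visual-regression.yml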
Documentation
- Developer Onboarding — Setup, architecture, conventions, troubleshooting
- Documentation index — Complete index of the documentation
- See docs/ for detailed architecture and development guides. Older audits and reports are archived in docs/archive/.