# Veza Environment Variables
> **Canonical reference for every variable read by the Veza platform.**
> Synced 2026-04-23 via a direct survey of the code (`os.Getenv` in Go, `std::env::var` in Rust, `import.meta.env` on the React side).
> **Go SSOT**: [`veza-backend-api/internal/config/config.go`](../veza-backend-api/internal/config/config.go) lines 276–465.
> **Templates**: [`veza-backend-api/.env.template`](../veza-backend-api/.env.template), [`veza-stream-server/.env.example`](../veza-stream-server/.env.example), [`docker-compose.env.example`](../docker-compose.env.example).
> **Related documents**: [`ENV_CONFIG.md`](ENV_CONFIG.md) (veza.fr dev setup), [`ENVIRONMENT_REAL_SETUP.md`](ENVIRONMENT_REAL_SETUP.md) (walkthrough).
> **Rule**: when you add a variable in the code, update this file and `.env.template` in the same commit. Audit reference: `AUDIT_REPORT.md` §9 item #15.

---
## Quick start: the 5 minimum variables

To boot a dev backend (`make dev`):

| Variable | Example | Why |
| --- | --- | --- |
| `DATABASE_URL` | `postgres://veza:password@veza.fr:15432/veza?sslmode=disable` | Postgres; boot fails without it. |
| `REDIS_URL` | `redis://veza.fr:16379` | CSRF + rate limiting + cache. No silent fallback in prod. |
| `JWT_SECRET` | ≥32 random characters | HS256 fallback. Prefer `JWT_PRIVATE_KEY_PATH` + `JWT_PUBLIC_KEY_PATH` in prod. |
| `CORS_ALLOWED_ORIGINS` | `http://veza.fr:5173,http://veza.fr:3000` | No wildcards in prod. |
| `FRONTEND_URL` | `http://veza.fr:5173` | OAuth redirects + email links. |

Everything else has a reasonable default or is opt-in.
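
Assembled into a minimal `.env` (same illustrative values as the table above; generate your own secret):

```bash
# The five hard requirements, nothing else.
DATABASE_URL=postgres://veza:password@veza.fr:15432/veza?sslmode=disable
REDIS_URL=redis://veza.fr:16379
JWT_SECRET=change-me-32-plus-random-chars   # e.g. output of `openssl rand -hex 32`
CORS_ALLOWED_ORIGINS=http://veza.fr:5173,http://veza.fr:3000
FRONTEND_URL=http://veza.fr:5173
```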
---

## Table of contents
1. [Core / environment](#1-core--environment)
2. [Database](#2-database)
3. [Redis](#3-redis)
4. [JWT / authentication](#4-jwt--authentication)
5. [OAuth providers](#5-oauth-providers)
6. [CORS / cookies](#6-cors--cookies)
7. [Rate limiting](#7-rate-limiting)
8. [SMTP / email](#8-smtp--email)
9. [Hyperswitch / payments](#9-hyperswitch--payments)
10. [Stripe Connect / creator payouts](#10-stripe-connect--creator-payouts)
11. [RabbitMQ / event bus](#11-rabbitmq--event-bus)
12. [S3 / MinIO storage](#12-s3--minio-storage)
13. [HLS streaming + track storage backend](#13-hls-streaming--track-storage-backend)
14. [Stream server (backend ↔ stream)](#14-stream-server-backend--stream)
15. [Elasticsearch](#15-elasticsearch)
16. [ClamAV / antivirus](#16-clamav--antivirus)
17. [Sentry / error tracking](#17-sentry--error-tracking)
18. [Logging](#18-logging)
19. [Monitoring / metrics](#19-monitoring--metrics)
20. [Frontend (Vite)](#20-frontend-vite)
21. [Feature flags](#21-feature-flags)
22. [Password policy](#22-password-policy)
23. [Build / version info](#23-build--version-info)
24. [RTMP / Web Push / misc](#24-rtmp--web-push--misc)
25. [Stream server (Rust): dedicated variables](#25-stream-server-rust-dedicated-variables)
26. [Security headers (recap)](#26-security-headers-recap)
27. [Deprecated / legacy variables](#27-deprecated--legacy-variables)
28. [Production validation rules](#28-production-validation-rules)
29. [Template ↔ code drift](#29-template--code-drift)
30. [OpenTelemetry / distributed tracing](#30-opentelemetry--distributed-tracing-v109-day-9)
31. [Startup checklist](#31-startup-checklist)

**Legend**: a **bold variable** is production-critical (validated at boot).

---

## 1. Core / environment

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| **`APP_ENV`** | `development` | `config.go:276` | `development` / `staging` / `production`. Drives the validation rules and many defaults. |
| `APP_DOMAIN` | `veza.fr` | `config.go:299` | Single source of truth for the URLs derived in dev. `APP_DOMAIN=example.com` plus an `/etc/hosts` entry is enough. |
| `APP_PORT` | `8080` | `config.go:309` | HTTP port. |
| `LOG_LEVEL` | `INFO` | `config.go:296` | `DEBUG` / `INFO` / `WARN` / `ERROR`. **`DEBUG` is rejected in prod** (`validation.go:876`). |
| `GO_ENV`, `NODE_ENV` | — | legacy | Historical aliases of `APP_ENV`. Not canonical. |

## 2. Database

Hard requirement: `DATABASE_URL`. The default pool is tuned for a single-pod dev setup; bump it for multi-replica deployments.

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| **`DATABASE_URL`** | — | `config.go:320` | Postgres DSN. Boot fails without it. |
| `DATABASE_READ_URL` | (empty) | `config.go:342` | Optional read-replica DSN. Falls back to `DATABASE_URL`. |
| `DB_MAX_OPEN_CONNS` | `50` | `database/db_init.go` | Pool: concurrent connections. |
| `DB_MAX_IDLE_CONNS` | `12` | `database/db_init.go` | Pool: idle connections. |
| `DB_MAX_IDLE_TIME` | `5m` | `database/db_init.go` | Idle timeout. |
| `DB_MAX_LIFETIME` | `10m` | `database/db_init.go` | Max lifetime before recycling. |
| `DB_READ_MAX_OPEN_CONNS` | `25` | `database/db_init.go` | Read-replica pool: max open. |
| `DB_READ_MAX_IDLE_CONNS` | `6` | `database/db_init.go` | Read-replica pool: max idle. |
| `DB_MAX_RETRIES` | `5` | `config.go:380` | Retries on connection failure. |
| `DB_RETRY_INTERVAL` | `5s` | `config.go:381` | Delay between retries. |

> **Drift**: the template uses `DATABASE_MAX_OPEN_CONNS` / `DATABASE_MAX_IDLE_CONNS` / `DATABASE_CONN_MAX_LIFETIME`, and **those names are not read by the code**. Use `DB_MAX_*`. Template fixed in v1.1.0.
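
A sketch of pool overrides using the canonical names; the values are illustrative, not tuning advice:

```bash
# Canonical names read by database/db_init.go (not DATABASE_MAX_*).
DB_MAX_OPEN_CONNS=100          # bumped from the 50 default for multi-replica
DB_MAX_IDLE_CONNS=25
DB_MAX_IDLE_TIME=5m
DB_MAX_LIFETIME=10m
DATABASE_READ_URL=postgres://veza:password@replica.veza.fr:5432/veza
DB_READ_MAX_OPEN_CONNS=50
```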
## 3. Redis

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| **`REDIS_URL`** | `redis://veza.fr:6379` | `config.go:338` | Full URL. **Must be explicit in prod** (`validation.go:916`); no fallback that would break multi-pod. |
| `REDIS_ENABLE` | `true` | `config.go:339` | Disabling Redis disables CSRF, rate limiting, and cache. |
| `REDIS_SENTINEL_ADDRS` | (none) | `redis_init.go` | v1.0.9 W3 Day 11. CSV of Sentinel `host:port` pairs (e.g. `redis-1.lxd:26379,redis-2.lxd:26379,redis-3.lxd:26379`). When non-empty, the backend uses `redis.NewFailoverClient` instead of a direct client; `REDIS_URL` then only supplies the password + DB index. |
| `REDIS_SENTINEL_MASTER_NAME` | `veza-master` | `redis_init.go` | Must match the `sentinel monitor <name> ...` directive on the Sentinel side. |
| `REDIS_SENTINEL_PASSWORD` | (empty) | `redis_init.go` | Sentinel-to-Sentinel auth (kept separate from `REDIS_PASSWORD` to limit the blast radius). |

`REDIS_ADDR`, `REDIS_PASSWORD`, and `REDIS_DB` still appear in the template but **are no longer read**; use `REDIS_URL`. See [§27](#27-deprecated--legacy-variables).
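
A Sentinel setup sketch with the lab addresses from the table above (passwords are placeholders):

```bash
# Redis Sentinel HA (v1.0.9 W3 Day 11).
REDIS_SENTINEL_ADDRS=redis-1.lxd:26379,redis-2.lxd:26379,redis-3.lxd:26379
REDIS_SENTINEL_MASTER_NAME=veza-master   # must match `sentinel monitor veza-master ...`
REDIS_SENTINEL_PASSWORD='<sentinel-password>'
# With Sentinel set, REDIS_URL only supplies the Redis password + DB index:
REDIS_URL=redis://:redis-password@host-ignored:6379/0
```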
## 4. JWT / authentication

RS256 preferred (prod), HS256 fallback (dev). Keys are generated by `scripts/generate-jwt-keys.sh`.

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `JWT_PRIVATE_KEY_PATH` | (empty) | `config.go:316` | Path to the RSA private key (PEM). Preferred in prod. |
| `JWT_PUBLIC_KEY_PATH` | (empty) | `config.go:317` | Path to the RSA public key. |
| `JWT_SECRET` | — | `config.go:318` | HS256 fallback. ≥32 chars. Required when RS256 is not configured. |
| `JWT_ISSUER` | `veza-api` | `config.go:335` | `iss` claim. |
| `JWT_AUDIENCE` | `veza-platform` | `config.go:336` | `aud` claim. |
| **`CHAT_JWT_SECRET`** | falls back to `JWT_SECRET` | `config.go:337` | Chat WebSocket tokens. **Must differ from `JWT_SECRET` in prod** (`validation.go:891`). |

> `JWT_ACCESS_TOKEN_DURATION` and `JWT_REFRESH_TOKEN_DURATION` in the template are **not read**; the durations are hardcoded (access 5 min, refresh 7 d).

**RS256 key generation**:

```bash
scripts/generate-jwt-keys.sh
# ou manuellement :
openssl genrsa -out jwt-private.pem 2048
openssl rsa -in jwt-private.pem -pubout -out jwt-public.pem
```
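
Then point the backend at the generated pair (the paths below are hypothetical; use wherever you installed the keys):

```bash
# RS256 in prod: with both paths set, JWT_SECRET is only the HS256 fallback.
JWT_PRIVATE_KEY_PATH=/etc/veza/keys/jwt-private.pem
JWT_PUBLIC_KEY_PATH=/etc/veza/keys/jwt-public.pem
JWT_ISSUER=veza-api
JWT_AUDIENCE=veza-platform
```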
## 5. OAuth providers

Opt-in per provider: set both `CLIENT_ID` and `CLIENT_SECRET` to enable one.

| Variable | Role |
| --- | --- |
| **`OAUTH_ENCRYPTION_KEY`** | AES-256 key (≥32 bytes) used to encrypt OAuth refresh tokens at rest. **Required in prod** (`validation.go:896`). |
| `OAUTH_ALLOWED_REDIRECT_DOMAINS` | Whitelist of redirect-URI domains. Defaults derived from `CORS_ALLOWED_ORIGINS`. |
| `OAUTH_GOOGLE_CLIENT_ID` / `_SECRET` | Google OAuth. |
| `OAUTH_GITHUB_CLIENT_ID` / `_SECRET` | GitHub OAuth. |
| `OAUTH_DISCORD_CLIENT_ID` / `_SECRET` | Discord OAuth. |
| `OAUTH_SPOTIFY_CLIENT_ID` / `_SECRET` | Spotify OAuth. |

All read in `veza-backend-api/internal/config/services_init.go`.
## 6. CORS / cookies

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| **`CORS_ALLOWED_ORIGINS`** | env-aware | `config.go:302` | CSV list. **No wildcards in prod** (`validation.go:869`). |
| `COOKIE_SECURE` | `false` (dev), `true` (prod) | middleware/cors | HTTPS-only cookies. |
| `COOKIE_SAME_SITE` | `lax` | `config.go:401` | `strict` / `lax` / `none`. |
| `COOKIE_DOMAIN` | (empty) | `config.go:402` | Empty = current domain. |
| `COOKIE_HTTP_ONLY` | `true` | `config.go:403` | Blocks JS access. |
| `COOKIE_PATH` | `/` | `config.go:404` | Cookie path. |
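
A production-style sketch combining the two concerns (origins are illustrative):

```bash
# Explicit origins, HTTPS-only cookies.
CORS_ALLOWED_ORIGINS=https://veza.fr,https://www.veza.fr   # no wildcards in prod
COOKIE_SECURE=true
COOKIE_SAME_SITE=strict
COOKIE_DOMAIN=veza.fr
```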
## 7. Rate limiting

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `RATE_LIMIT_LIMIT` | env-aware | `config.go:305` | Requests per window. |
| `RATE_LIMIT_WINDOW` | `60` (seconds) | `config.go:306` | Window size. |
| `RATE_LIMIT_IP_PER_HOUR` | `100` prod / `5000` dev | `middleware/rate_limit.go` | Per-IP cap (unauthenticated). |
| `RATE_LIMIT_USER_PER_HOUR` | `1000` prod / `2000` dev | `middleware/rate_limit.go` | Per-user cap (authenticated). |
| `AUTH_RATE_LIMIT_LOGIN_ATTEMPTS` | `5` prod / `100` test | `config.go:374` | Failed-login lockout threshold. |
| `AUTH_RATE_LIMIT_LOGIN_WINDOW` | `1` prod / `60` test (minutes) | `config.go:375` | Lockout window. |
| `ACCOUNT_LOCKOUT_EXEMPT_EMAILS` | (empty) | `config.go:376` | CSV of exempt emails. |
| `USER_RATE_LIMIT_BURST` | `100` | `middlewares_init.go:54` | Burst tokens per user (BE-SVC-002). |
| `USER_RATE_LIMIT_PER_MINUTE` | `1000` | `middlewares_init.go:54` | Per-user RPM. |
| `DISABLE_RATE_LIMIT_FOR_TESTS` | — | middleware | E2E bypass (test only). |
## 8. SMTP / email

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `SMTP_HOST` | `localhost` | `mailer/sender.go:120` | MailHog by default in dev. |
| `SMTP_PORT` | `1025` | `mailer/sender.go:121` | MailHog. |
| `SMTP_USERNAME` | (empty) | `mailer/sender.go:122` | Canonical name (v1.0.6). |
| `SMTP_PASSWORD` | (empty) | `mailer/sender.go:123` | — |
| `SMTP_FROM` | `no-reply@veza.local` | `mailer/sender.go:124` | Sender address. |
| `SMTP_FROM_NAME` | `Veza (dev)` | `mailer/sender.go:125` | Display name. |
| `EMAIL_TEMPLATE_DIR` | — | email handlers | MJML/HTML template directory. |

`SMTP_USER`, `FROM_EMAIL`, and `FROM_NAME` still work but log a deprecation warning; see [§27](#27-deprecated--legacy-variables).
## 9. Hyperswitch / payments

Disabled by default. Once enabled, several keys become mandatory.

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| **`HYPERSWITCH_ENABLED`** | `false` | `config.go:407` | Master switch. **Fail-closed in prod** (`validation.go:908`): a prod boot refuses to start if enabled without live keys. |
| `HYPERSWITCH_LIVE_MODE` | `false` | `config.go:408` | Live vs test keys. |
| `HYPERSWITCH_URL` | `http://localhost:18081` | `config.go:409` | Router endpoint. |
| **`HYPERSWITCH_API_KEY`** | (empty) | `config.go:410` | Required when `HYPERSWITCH_ENABLED=true`. |
| **`HYPERSWITCH_WEBHOOK_SECRET`** | (empty) | `config.go:411` | Required in prod for signature verification. |
| `CHECKOUT_SUCCESS_URL` | `/purchases` | `config.go:412` | Post-payment redirect. |
| `HYPERSWITCH_WEBHOOK_LOG_RETENTION_DAYS` | `90` | `config.go:433` | Retention in days for `hyperswitch_webhook_log`. |
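
A sketch of the enabled state (URL and keys are placeholders):

```bash
# Prod boot refuses HYPERSWITCH_ENABLED=true without both keys.
HYPERSWITCH_ENABLED=true
HYPERSWITCH_LIVE_MODE=true
HYPERSWITCH_URL='<router-endpoint>'
HYPERSWITCH_API_KEY='<live-api-key>'
HYPERSWITCH_WEBHOOK_SECRET='<webhook-secret>'
```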
## 10. Stripe Connect / creator payouts

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `STRIPE_CONNECT_ENABLED` | `false` | `config.go:415` | Enables the creator payout flow (v0.602). |
| `STRIPE_SECRET_KEY` | (empty) | `config.go:416` | Required when enabled. |
| `STRIPE_CONNECT_WEBHOOK_SECRET` | (empty) | `config.go:417` | Required when enabled. |
| `PLATFORM_FEE_RATE` | `0.10` | `config.go:418` | Platform commission (0–1). |
| `TRANSFER_RETRY_ENABLED` | `true` | `config.go:421` | Retry worker for failed transfers. |
| `TRANSFER_RETRY_MAX` | `3` | `config.go:422` | — |
| `TRANSFER_RETRY_INTERVAL` | `5m` | `config.go:423` | Delay between retries. |
| `REVERSAL_WORKER_ENABLED` | `true` | `config.go:426` | Async reverse-charge worker (v1.0.7 item B). |
| `REVERSAL_MAX_RETRIES` | `5` | `config.go:427` | — |
| `REVERSAL_CHECK_INTERVAL` | `1m` | `config.go:428` | Poll interval. |
| `REVERSAL_BACKOFF_BASE` | `1m` | `config.go:429` | Exponential backoff base. |
| `REVERSAL_BACKOFF_MAX` | `1h` | `config.go:430` | Backoff ceiling. |
| `RECONCILE_WORKER_ENABLED` | `true` | `config.go:436` | Reconciliation of stuck orders/refunds (v1.0.7 item C). |
| `RECONCILE_INTERVAL` | `1h` | `config.go:437` | Sweep interval. |
| `RECONCILE_ORDER_STUCK_AFTER` | `30m` | `config.go:438` | Order staleness threshold. |
| `RECONCILE_REFUND_STUCK_AFTER` | `30m` | `config.go:439` | Refund staleness threshold. |
| `RECONCILE_REFUND_ORPHAN_AFTER` | `5m` | `config.go:440` | Orphan refund threshold. |
## 11. RabbitMQ / event bus

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `RABBITMQ_ENABLE` | `true` | `config.go:397` | Enables async events. |
| **`RABBITMQ_URL`** | `amqp://veza:password@veza.fr:15672/` | `config.go:326` | Connection URL. Required in prod when enabled. |
| `RABBITMQ_MAX_RETRIES` | `10` | `config.go:395` | Connection retries. |
| `RABBITMQ_RETRY_INTERVAL` | `5s` | `config.go:396` | Retry delay. |
## 12. S3 / MinIO storage

Opt-in. The main upload path does not use S3 yet (FUNCTIONAL_AUDIT §4 item 2: local-disk storage only).

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `AWS_S3_ENABLED` | `false` | `config.go:364` | Master switch. |
| `AWS_S3_BUCKET` | (empty) | `config.go:359` | Bucket name. In distributed prod (v1.0.9 W3 Day 12): `veza-prod-tracks`. |
| `AWS_REGION` | `us-east-1` | `config.go:360` | Region. |
| `AWS_S3_ENDPOINT` | (empty) | `config.go:361` | Custom endpoint (MinIO). In distributed prod: `http://minio-1.lxd:9000` directly, or via HAProxy (v1.0.9 W4 Day 19). |
| `AWS_ACCESS_KEY_ID` | (empty) | `config.go:362` | Optional with an EC2 IAM role. |
| `AWS_SECRET_ACCESS_KEY` | (empty) | `config.go:363` | — |

**Single-node → distributed migration (v1.0.9 W3 Day 12)**: `bash scripts/minio-migrate-from-single.sh` mirrors the existing bucket to the new 4-node EC:2 cluster. See `infra/ansible/roles/minio_distributed/README.md` for the deployment.
### CDN edge (v1.0.9 W3 Day 13, optional)

Four variables, all optional. With `CDN_ENABLED=false` (the default), browsers are redirected straight to MinIO/S3 exactly as in v1.0.8.

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `CDN_ENABLED` | `false` | `config.go` | Master switch. `false` ⇒ CDN branches are no-ops. |
| `CDN_PROVIDER` | `none` | `config.go` | `bunny` (recommended), `cloudflare`, `cloudflare_r2`, `cloudfront`, `none`. |
| `CDN_BASE_URL` | (empty) | `config.go` | E.g. `https://veza.b-cdn.net`. Origin-pull is configured provider-side to point at MinIO. |
| `CDN_SECURITY_KEY` | (empty) | `config.go` | **Bunny.net only**: the Pull Zone Token Authentication Key. Without it, `GenerateSignedURL` fails and the code falls back to the direct S3 presign. |

When active, `TrackService.GetStorageURL` generates a signed CDN URL (Bunny token auth = SHA-256(key + path + expires)) instead of the MinIO presign. Origin-pull on the CDN side guarantees files are served from the edge; the MinIO presign is never exposed to the browser.

**Provider matrix**:
- **Bunny.net** (recommended for v1.0): native signed URLs, $0.005/GB audio egress, good edge latency in the EU. Token auth supported out of the box.
- **Cloudflare CDN / R2**: signed URLs require a Worker; not supported in v1.0, returns an unsigned URL (auth is gated backend-side before the 302 redirect).
- **CloudFront**: stub already in place, full signing deferred to v1.1.
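
A Bunny setup sketch, followed by a shell reconstruction of the token scheme described above (SHA-256 over key + path + expires, URL-safe base64 without padding). Treat the exact query parameter names (`token`, `expires`) as an assumption to verify against `cdn_service.go`:

```bash
CDN_ENABLED=true
CDN_PROVIDER=bunny
CDN_BASE_URL=https://veza.b-cdn.net
CDN_SECURITY_KEY='<pull-zone-token-auth-key>'

# Rebuild a signed URL by hand to sanity-check the edge config.
path="/tracks/123/master.mp3"                 # hypothetical object path
expires=$(( $(date +%s) + 3600 ))
token=$(printf '%s%s%s' "$CDN_SECURITY_KEY" "$path" "$expires" \
  | openssl dgst -sha256 -binary | base64 | tr '+/' '-_' | tr -d '=')
echo "${CDN_BASE_URL}${path}?token=${token}&expires=${expires}"
```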
## 13. HLS streaming + track storage backend

### HLS

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `HLS_STREAMING` | `false` | `config.go:355` | Enables the HLS manifest/segment endpoints. Off = the player falls back to `/tracks/:id/stream` with Range requests. |
| `HLS_STORAGE_DIR` | `/tmp/veza-hls` | `config.go:356` | HLS segment storage. |
| `HLS_OUTPUT_DIR` | (contextual) | `workers/job_worker.go` | Output for transcoded segments. |
| `STREAM_HLS_BASE_URL` | (empty) | `live_stream_callback.go` | Public base URL for m3u8 links. |

Activation pattern: `HLS_STREAMING=true` + stream server up + transcoding pipeline active. Off by default; the direct-stream fallback is enough to get started.
### Track upload storage backend (v1.0.8 Phase 0)

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| **`TRACK_STORAGE_BACKEND`** | `local` | `config.go` (S3 block) | Where `TrackService.UploadTrack()` writes track files. Values: `local` (disk, `uploads/tracks/`) or `s3` (via `S3StorageService`). **In prod, `s3` requires `AWS_S3_ENABLED=true`**; boot fails otherwise (`validation.go` rule 11). Dev/staging warn and fall back to `local`. |

To migrate an environment (a `.env` sketch follows below):
1. Deploy with `TRACK_STORAGE_BACKEND=local` (no change).
2. Enable S3: `AWS_S3_ENABLED=true`, `AWS_S3_BUCKET=<bucket>`, `AWS_S3_ENDPOINT=<minio-url>`, credentials.
3. Flip: `TRACK_STORAGE_BACKEND=s3`. New uploads go to MinIO/S3; existing tracks stay `local`.
4. (Optional) Run `cmd/migrate_storage` (Phase 3) to move `local` tracks to `s3`.

Rollback: revert to `TRACK_STORAGE_BACKEND=local` and new uploads go back to disk. Already-migrated tracks stay on `s3` (the read path serves them via signed URL in Phase 2).
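
Steps 2–3 as a single `.env` diff (bucket and endpoint taken from §12; credentials are placeholders). Flip the last line only once S3 is reachable:

```bash
AWS_S3_ENABLED=true
AWS_S3_BUCKET=veza-prod-tracks
AWS_S3_ENDPOINT=http://minio-1.lxd:9000
AWS_ACCESS_KEY_ID='<access-key>'
AWS_SECRET_ACCESS_KEY='<secret-key>'
TRACK_STORAGE_BACKEND=s3   # prod boot fails if s3 is set while AWS_S3_ENABLED=false
```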
## 13bis. WebRTC ICE servers (v1.0.9 item 1.2)

The public endpoint `GET /api/v1/config/webrtc` returns `{ iceServers }` to the SPA so 1:1 calls (`features/chat/hooks/useWebRTC.ts`) can hand a runtime ICE list to `RTCPeerConnection` without baking secrets into the bundle. See `infra/coturn/README.md` for the relay deploy guide.

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `WEBRTC_STUN_URLS` | `stun:stun.l.google.com:19302` | `config.go` | Comma-separated list. STUN alone is enough outside symmetric NAT. |
| `WEBRTC_TURN_URLS` | (empty) | `config.go` | Comma-separated list, e.g. `turn:turn.veza.fr:3478,turns:turn.veza.fr:5349`. Empty = no TURN relay exposed; the SPA shows the "TURN not configured" advisory. |
| `WEBRTC_TURN_USERNAME` | (empty) | `config.go` | Static `lt-cred-mech` username on the coturn side. **Must match `user=` in `infra/coturn/turnserver.conf`**. |
| `WEBRTC_TURN_CREDENTIAL` | (empty) | `config.go` | Static password. Rotation = update the coturn conf + these two env vars + reload coturn. |

**Half-config = no TURN advertised.** The handler omits the TURN block if any one of URLs/Username/Credential is empty; a half-config is worse than nothing (the browser would surface an auth error at call time instead of falling back cleanly to STUN).
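
A full STUN + TURN sketch; the username is hypothetical and must match the coturn config:

```bash
WEBRTC_STUN_URLS=stun:stun.l.google.com:19302
WEBRTC_TURN_URLS=turn:turn.veza.fr:3478,turns:turn.veza.fr:5349
WEBRTC_TURN_USERNAME=veza                      # matches `user=veza:...` in turnserver.conf
WEBRTC_TURN_CREDENTIAL='<static-lt-cred-password>'
# Leaving any one of the three TURN vars empty drops the whole TURN block.
```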
## 14. Stream server (backend ↔ stream)

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `STREAM_SERVER_URL` | `http://veza.fr:8082` | `config.go:344` | Stream server URL as seen by the backend. |
| **`STREAM_SERVER_INTERNAL_API_KEY`** | (empty) | `config.go:345` | Shared secret for `/internal/jobs/transcode`. Must match the stream server's `INTERNAL_API_KEY`. |
| `CHAT_SERVER_URL` | `http://veza.fr:8081` | `config.go:346` | Historical; chat has lived in the Go backend since v0.502. |
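
The pairing spelled out (the secret value is a placeholder; only equality between the two files matters):

```bash
# veza-backend-api/.env
STREAM_SERVER_URL=http://veza.fr:8082
STREAM_SERVER_INTERNAL_API_KEY='<shared-secret>'

# veza-stream-server/.env (must carry the same value)
INTERNAL_API_KEY='<shared-secret>'
```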
## 15. Elasticsearch

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `ELASTICSEARCH_URL` | (empty) | `elasticsearch/config.go:14` | Empty = search disabled (Postgres fallback). |
| `ELASTICSEARCH_INDEX` | `veza-platform` | `elasticsearch/config.go:15` | Index prefix. |
| `ELASTICSEARCH_AUTO_INDEX` | `false` | search service | Auto-create the index at boot. |

Admin: `POST /api/v1/admin/search/reindex` for a manual reindex.
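
A manual-trigger sketch; the admin bearer token is an assumption based on the route living under `/admin`:

```bash
# Reindex everything after (re)enabling Elasticsearch.
curl -X POST "http://veza.fr:8080/api/v1/admin/search/reindex" \
  -H "Authorization: Bearer ${ADMIN_JWT}"   # token of an admin user
```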
## 16. ClamAV / antivirus

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `ENABLE_CLAMAV` | `true` | `api/router.go` | Enables AV scanning on upload. |
| **`CLAMAV_REQUIRED`** | `true` | `validation.go:886` | **Prod boot fails if ClamAV is unavailable**. Only dev may set `false`. |
| `CLAMAV_ADDRESS` | — | upload service | clamd `host:port`. |
| `CLAMAV_CLAMD_PATH` | — | upload service | Local socket alternative. |
## 17. Sentry / error tracking

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `SENTRY_DSN` | (empty) | `config.go:367` | Empty = Sentry disabled. |
| `SENTRY_ENVIRONMENT` | derived from `APP_ENV` | `config.go:368` | Sentry UI label. |
| `SENTRY_SAMPLE_RATE_ERRORS` | `1.0` | `config.go:369` | 0–1. |
| `SENTRY_SAMPLE_RATE_TRANSACTIONS` | `0.1` | `config.go:370` | 0–1. |
## 18. Logging

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `LOG_DIR` | `/var/log/veza` | `config.go:444` | Log file directory. |
| `LOG_FORMAT` | env-aware | logging | `json` (prod) / `console` (dev). |
| `LOG_AGGREGATION_ENABLED` | derived | `config.go:464` | Auto-true when an endpoint is set. |
| `LOG_AGGREGATION_ENDPOINT` | (empty) | `config.go:386` | Loki or compatible. |
| `LOG_AGGREGATION_BATCH_SIZE` | `100` | `config.go:387` | Lines per batch. |
| `LOG_AGGREGATION_FLUSH_INTERVAL` | `5s` | `config.go:388` | Auto-flush. |
| `LOG_AGGREGATION_TIMEOUT` | `10s` | `config.go:389` | HTTP timeout. |
| `LOG_AGGREGATION_LABELS` | `service=veza-api` | `config.go:390` | CSV of `key=value`. |
## 19. Monitoring / metrics

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `PROMETHEUS_URL` | (empty) | `config.go` | Optional push gateway. |
| `SLOW_REQUEST_THRESHOLD_MS` | `1000` | `middleware/request_logger.go` | Log + counter for slower requests. |
| `METRICS_ALLOWED_IPS` | — | metrics middleware | CIDR whitelist for `/metrics`. |
| `METRICS_BEARER_TOKEN` | — | metrics middleware | Optional bearer auth. |
| `METRICS_PUBLIC_IN_DEV` | `false` | metrics middleware | Skips auth in dev. |

In prod, if neither `METRICS_ALLOWED_IPS` nor `METRICS_BEARER_TOKEN` is set, `/metrics` returns 403.
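
Either guard alone avoids the prod 403; the range and token are illustrative:

```bash
METRICS_ALLOWED_IPS=10.0.0.0/8          # Prometheus scraper range(s)
METRICS_BEARER_TOKEN='<scrape-token>'   # and/or a bearer token
```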
## 20. Frontend (Vite)

Everything is read in `apps/web/src/lib/env.ts` (plus `apps/web/src/lib/featureFlags.ts`). The `VITE_` prefix is mandatory; other vars are not exposed to the browser.

**Base URLs**, derived from `VITE_DOMAIN` when omitted:

| Variable | Default | Role |
| --- | --- | --- |
| `VITE_DOMAIN` | `veza.fr` | Root domain for derivations. |
| `VITE_API_URL` | `/api/v1` | Backend API (relative or absolute). |
| `VITE_WS_URL` | derived | WebSocket URL. |
| `VITE_STREAM_URL` | `ws://veza.fr:8082/stream` | Stream server WS. |
| `VITE_HLS_BASE_URL` | derived | m3u8 base. |
| `VITE_UPLOAD_URL` | `/upload` | Upload endpoint. |
| `VITE_BACKEND_PORT` | `18080` | Vite dev proxy target. |
| `VITE_APP_NAME` | `Veza` | Display name. |
| `VITE_API_VERSION` | `v1` | API version suffix. |
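
A dev `.env.local` sketch leaning on derivation, with explicit overrides only where the derived value doesn't fit (values are illustrative):

```bash
# apps/web/.env.local
VITE_DOMAIN=veza.fr          # everything else derives from this
VITE_API_URL=/api/v1
VITE_STREAM_URL=ws://veza.fr:8082/stream
VITE_BACKEND_PORT=18080      # Vite dev proxy target
```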
**Third-party keys**:

| Variable | Role |
| --- | --- |
| `VITE_HYPERSWITCH_PUBLISHABLE_KEY` | Hyperswitch checkout widget. |
| `VITE_FCM_VAPID_KEY` | FCM push notifications. |
| `VITE_SENTRY_DSN` | Frontend Sentry. |

**Runtime toggles**:

| Variable | Default | Role |
| --- | --- | --- |
| `VITE_DEBUG` | `false` | Verbose API logging (`true` / `1`). |
| `VITE_USE_MSW` | `0` | Mock Service Worker. |
| `VITE_LOG_LEVEL` | (utils) | Frontend log level. |
| `VITE_LOG_ENDPOINT` | (empty) | Remote logging. |
| `VITE_ENABLE_VALIDATION_ALERTING` | — | UI validation alerts. |
| `VITE_GITHUB_REPO` | — | Feedback link. |
| `VITE_STORYBOOK` | — | Storybook mode. |

## 21. Feature flags

Backend (`config.go`):

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `MAINTENANCE_MODE` | `false` | `middleware/maintenance.go` | 503 for non-admin routes. TTL-cached from `platform_settings`. |
| `BYPASS_CONTENT_CREATOR_ROLE` | `false` | `validation.go:1018` | Dev/test only; prod validation fails if true. |
| `CSRF_DISABLED` | `false` | CSRF middleware | Dev only. |
| `HARD_DELETE_CRON_ENABLED` | `false` | cleanup jobs | Nightly hard-delete sweep. |

Frontend (`apps/web/src/lib/featureFlags.ts`):

| Variable | Default | Feature |
| --- | --- | --- |
| `VITE_FEATURE_TWO_FACTOR_AUTH` | `true` | 2FA. |
| `VITE_FEATURE_PLAYLIST_COLLABORATION` | `true` | Shared playlists. |
| `VITE_FEATURE_PLAYLIST_SEARCH` | `false` | Intra-playlist search. |
| `VITE_FEATURE_PLAYLIST_SHARE` | `false` | Shareable token links. |
| `VITE_FEATURE_PLAYLIST_RECOMMENDATIONS` | `false` | — |
| `VITE_FEATURE_HLS_STREAMING` | `true` | HLS playback in the player (frontend gate). |
| `VITE_FEATURE_ROLE_MANAGEMENT` | `false` | Admin role matrix. |
| `VITE_FEATURE_NOTIFICATIONS` | `false` | Push notifications. |

## 22. Password policy

| Variable | Default | Read at |
| --- | --- | --- |
| `PASSWORD_MIN_LENGTH` | (built-in) | `password_validator.go` |
| `PASSWORD_MAX_LENGTH` | (built-in) | `password_validator.go` |
| `PASSWORD_REQUIRE_UPPER` | (built-in) | `password_validator.go` |
| `PASSWORD_REQUIRE_LOWER` | (built-in) | `password_validator.go` |
| `PASSWORD_REQUIRE_NUMBER` | (built-in) | `password_validator.go` |
| `PASSWORD_REQUIRE_SPECIAL` | (built-in) | `password_validator.go` |

## 23. Build / version info

Injected at build time via `-ldflags`:

| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| `APP_VERSION` | `v1.0.0` | `routes_core.go` | Semantic version. |
| `GIT_COMMIT` | `unknown` | `routes_core.go` | Git SHA. |
| `BUILD_TIME` | (empty) | `routes_core.go` | RFC3339 timestamp. |

## 24. RTMP / Web Push / misc

| Variable | Role |
| --- | --- |
| `VAPID_PRIVATE_KEY` / `VAPID_PUBLIC_KEY` | Web Push (required if push is enabled). |
| `RTMP_CALLBACK_SECRET` | Signature for the Nginx-RTMP `on_publish` callback. |
| `NGINX_RTMP_HOST` | Host shown to streamers (`rtmp_url`). Default `stream.veza.app`. |
| `NGINX_RTMP_ADDR` | Internal RTMP endpoint. |
| `GEOIP_DB_PATH` | GeoIP2 database path (optional IP geolocation). |
| `HANDLER_TIMEOUT` | Global handler timeout (default `30s`, `config.go:377`). |
| `MAX_CONCURRENT_UPLOADS` | Upload backpressure (default `10`, `config.go:312`). |
| `MAX_JSON_BODY_SIZE` | JSON body cap (default 1 MB). |
| `CONFIG_WATCH` | Reload on `.env` change (T0040, default `false`, `config.go:765`). |
| `FRONTEND_URL` | Frontend URL as seen by the backend (OAuth, emails). |
| `BASE_URL` | Public backend URL (OAuth callbacks). |
| `PGPASSWORD` | For the `psql` CLI, not the backend. |
| `VEZA_ROOT` / `VEZA_SKIP_INTEGRATION` | Test-suite helpers. |
| `E2E_TEST` / `TEST_EMAIL` / `TEST_USERNAME` / `TEST_PASSWORD` | E2E setup. |

**Live streaming** (v0.10.6 F471): Docker `live` profiles. Ports 1935 (RTMP) and 18083 (HTTP HLS). Callbacks go to `POST /api/v1/live/callback/publish` and `.../publish_done`.

## 25. Stream server (Rust): dedicated variables

The Rust stream server has ~50 variables of its own covering audio compression, TLS, Tokio workers, caching, and tracing. Main file: `veza-stream-server/src/config/mod.rs`.

**Connection**: `DATABASE_URL`, `REDIS_URL`, `RABBITMQ_URL`, `BACKEND_URL`, `PORT`, `SECRET_KEY`, `INTERNAL_API_KEY`, `ALLOWED_ORIGINS`, `JWT_PUBLIC_KEY_PATH`, `JWT_SECRET`, `JWT_EXPIRATION`.

**Storage / audio**: `AUDIO_DIR`, `STREAM_OUTPUT_DIR`, `COMPRESSION_ENABLED`, `COMPRESSION_LEVEL`, `COMPRESSION_MAX_CONCURRENT_JOBS`, `COMPRESSION_OUTPUT_DIR`, `COMPRESSION_TEMP_DIR`, `COMPRESSION_CLEANUP_AFTER_DAYS`, `FFMPEG_PATH`.

**DB pool**: `DB_POOL_SIZE`, `DB_CONNECTION_TIMEOUT`, `DB_IDLE_TIMEOUT`, `DB_MAX_LIFETIME`, `DB_MIN_CONNECTIONS`, `DB_MIGRATE_ON_START`, `DB_ENABLE_LOGGING`.

**Networking**: `TLS_CERT_PATH`, `TLS_KEY_PATH`, `TCP_NODELAY`, `TCP_KEEPALIVE`, `MAX_FILE_SIZE`, `MAX_RANGE_SIZE`, `MAX_CONCURRENT_STREAMS`, `STREAM_TIMEOUT`, `CORS_MAX_AGE`.

**Tokio runtime**: `THREAD_STACK_SIZE`, `MAX_BLOCKING_THREADS`, `WORKER_THREADS`.

**Cache**: `CACHE_TTL_SECONDS`, `CACHE_MAX_SIZE_MB`, `CACHE_COMPRESSION`, `CACHE_CLEANUP_INTERVAL`.

**Notifications**: `NOTIFICATIONS_ENABLED`, `NOTIFICATIONS_BATCH_SIZE`, `NOTIFICATIONS_MAX_QUEUE_SIZE`, `NOTIFICATIONS_DELIVERY_WORKERS`, `NOTIFICATIONS_RETRY_ATTEMPTS`, `NOTIFICATIONS_RETRY_DELAY`, `ALERT_WEBHOOKS`.

**Security**: `SECURE_HEADERS`, `CSRF_PROTECTION`, `SIGNATURE_TOLERANCE`.

**Observability**: `METRICS_ENABLED`, `METRICS_PORT`, `PROMETHEUS_NAMESPACE`, `JAEGER_ENDPOINT`, `HEALTH_CHECK_INTERVAL`, `LOG_DIR`, `LOG_LEVEL`, `LOG_FORMAT`.

Full documentation lives beside the stream server (`veza-stream-server/.env.example`), deliberately kept separate from this catalogue (different audience: stream operator vs. backend dev).
## 26. Security headers (recap)

Applied globally via `middleware.SecurityHeaders()` (`veza-backend-api/internal/middleware/security_headers.go`). Not configurable by env var; the config lives in the code.

| Header | Value | Role |
| --- | --- | --- |
| `Strict-Transport-Security` | `max-age=31536000; includeSubDomains; preload` (prod only) | HSTS |
| `X-Content-Type-Options` | `nosniff` | Anti MIME sniffing |
| `X-Frame-Options` | `DENY` (`SAMEORIGIN` for Swagger) | Anti clickjacking |
| `X-XSS-Protection` | `1; mode=block` | Legacy XSS |
| `Referrer-Policy` | `strict-origin-when-cross-origin` | Referrer |
| `Permissions-Policy` | `geolocation=(), microphone=(), camera=()...` | Feature restrictions |
| `Content-Security-Policy` | `default-src 'none'; ...` | Strict CSP outside Swagger |
| `X-Permitted-Cross-Domain-Policies` | `none` | Flash/PDF |
| `Cross-Origin-Embedder-Policy` | `require-corp` | COEP |
| `Cross-Origin-Opener-Policy` | `same-origin` | COOP |
| `Cross-Origin-Resource-Policy` | `same-origin` | CORP |
## 27. Deprecated / legacy variables

These still work but log a deprecation warning. **Removed in v1.1.0**; migrate now.

| Deprecated | Canonical | Notes |
| --- | --- | --- |
| `REDIS_ADDR` | `REDIS_URL` | The URL supersedes the host/port/db split. |
| `REDIS_PASSWORD` | `REDIS_URL` | Embedded in the URL. |
| `REDIS_DB` | `REDIS_URL` | Path segment `/0`. |
| `SMTP_USER` | `SMTP_USERNAME` | Renamed in v1.0.6. |
| `FROM_EMAIL` | `SMTP_FROM` | Unified in v1.0.6. |
| `FROM_NAME` | `SMTP_FROM_NAME` | Unified in v1.0.6. |
| `GO_ENV`, `NODE_ENV` | `APP_ENV` | — |
| `VITE_FRONTEND_URL` | `FRONTEND_URL` | The frontend reads it from the backend. |
| `JWT_ACCESS_TOKEN_DURATION` | (hardcoded) | Hardcoded in the JWT service. |
| `JWT_REFRESH_TOKEN_DURATION` | (hardcoded) | — |
| `RATE_LIMIT_ENABLED` | (implicit) | Always on when Redis is available. |
| `DATABASE_MAX_OPEN_CONNS` | `DB_MAX_OPEN_CONNS` | Template bug; renamed in v1.1.0. |
| `DATABASE_MAX_IDLE_CONNS` | `DB_MAX_IDLE_CONNS` | — |
| `DATABASE_CONN_MAX_LIFETIME` | `DB_MAX_LIFETIME` | — |
## 28. Production validation rules

`Config.Validate()` in `veza-backend-api/internal/config/validation.go` enforces:

| Rule | File:line |
| --- | --- |
| `LOG_LEVEL=DEBUG` rejected in prod | `validation.go:876` |
| `CORS_ALLOWED_ORIGINS` without wildcards in prod | `validation.go:869` |
| `CHAT_JWT_SECRET` must differ from `JWT_SECRET` in prod | `validation.go:891` |
| `CLAMAV_REQUIRED=false` rejected in prod | `validation.go:886` |
| `OAUTH_ENCRYPTION_KEY` ≥32 bytes required in prod | `validation.go:896` |
| `HYPERSWITCH_ENABLED=true` requires `HYPERSWITCH_API_KEY` + webhook secret | `validation.go:908` |
| `REDIS_URL` must be explicit in prod (no default fallback) | `validation.go:916` |
| `TRACK_STORAGE_BACKEND` must be `local` or `s3` (rule 11, v1.0.8) | `config.go` `ValidateForEnvironment` |
| `TRACK_STORAGE_BACKEND=s3` requires `AWS_S3_ENABLED=true` in prod | `config.go` `ValidateForEnvironment` |
| `BYPASS_CONTENT_CREATOR_ROLE=true` rejected in prod | `validation.go:1018` |

If a rule is violated, boot fails with a human-readable message pointing at the offending variable.

Manual validation: `./scripts/validate-env.sh development` (or `production`, `test`). Wired into `make doctor`.
## 29. Template ↔ code drift

The 2026-04-23 survey found inconsistencies between `.env.template` and the code. Non-blocking; cleanup planned for v1.1.0.

**Missing from the template** (the code reads them; a dev can get by without setting them):
`HLS_STREAMING` (added 2026-04-23), `TRACK_STORAGE_BACKEND` (added 2026-04-23), `SLOW_REQUEST_THRESHOLD_MS`, `CONFIG_WATCH`, `HANDLER_TIMEOUT`, `MAX_JSON_BODY_SIZE`, `GEOIP_DB_PATH`, `VAPID_PRIVATE_KEY` / `VAPID_PUBLIC_KEY`, `RTMP_CALLBACK_SECRET`, `NGINX_RTMP_*`, `HARD_DELETE_CRON_ENABLED`, `ELASTICSEARCH_AUTO_INDEX`.

**In the template but never read**: `REDIS_ADDR`, `REDIS_PASSWORD`, `REDIS_DB`, `JWT_ACCESS_TOKEN_DURATION`, `JWT_REFRESH_TOKEN_DURATION`, `RATE_LIMIT_ENABLED`, `DATABASE_MAX_OPEN_CONNS` / `DATABASE_MAX_IDLE_CONNS` / `DATABASE_CONN_MAX_LIFETIME` (wrong names).

**Naming inconsistency**: canonical `SMTP_USERNAME` vs legacy `SMTP_USER`; `DB_MAX_*` in the code vs `DATABASE_MAX_*` in the template.
## 30. OpenTelemetry / distributed tracing (v1.0.9 Day 9)

Four variables consumed by `veza-backend-api/internal/tracing/otlp_exporter.go` at boot. All optional; unset means the documented default behavior.

| Variable | Default | Effect |
| --- | --- | --- |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | `localhost:4317` | gRPC endpoint of the otel-collector. In prod: `otel-collector.lxd:4317`. |
| `OTEL_SDK_DISABLED` | `false` | `true` or `1` → no-op tracer (zero spans emitted). Useful for unit tests and local dev without a collector. |
| `OTEL_TRACES_SAMPLER_ARG` | `1.0` | Fraction of root traces sampled SDK-side (0–1). `1.0` recommended in prod since the collector applies its own tail-sampling behind it. |
| `OTEL_DEPLOYMENT_ENV` | (none) | Overrides `cfg.Env` for the `deployment.environment` resource attribute. Rarely useful. |

The binary **does not crash** when the collector is down: the exporter buffers and retries. Spans are dropped beyond 2048 in the buffer.

Instrumented hot paths (v1.0.9): `auth.login`, `track.upload.initiate`, `payment.webhook`, `search.query`. See `infra/ansible/roles/{otel_collector,tempo}/README.md` for deploying the pipeline.
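
A typical prod snippet plus the local-dev escape hatch (values mirror the defaults documented above):

```bash
# Prod: sample everything SDK-side, let the collector tail-sample.
OTEL_EXPORTER_OTLP_ENDPOINT=otel-collector.lxd:4317
OTEL_TRACES_SAMPLER_ARG=1.0
# Local dev without a collector:
# OTEL_SDK_DISABLED=true
```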
## 31. Startup checklist

1. Copy `veza-backend-api/.env.template` to `veza-backend-api/.env` and configure it.
2. For RS256 in prod: run `scripts/generate-jwt-keys.sh` and set `JWT_PRIVATE_KEY_PATH` + `JWT_PUBLIC_KEY_PATH`. Otherwise `JWT_SECRET` ≥32 chars.
3. Set the **5 minimums**: `DATABASE_URL`, `REDIS_URL`, `JWT_SECRET` (or RS256 keys), `CORS_ALLOWED_ORIGINS`, `FRONTEND_URL`.
4. For payments: `HYPERSWITCH_ENABLED=true` + `HYPERSWITCH_API_KEY` + `HYPERSWITCH_WEBHOOK_SECRET`.
5. Validate: `./scripts/validate-env.sh development` (or `production`).
6. Start: `make dev`.
7. In prod: never commit `.env` or `.pem` files.

---
*Last sync 2026-04-23, `v1.0.7-rc1` HEAD `778c8550`. When you add a variable in the code, update this file and `.env.template` in the same commit. Cross-reference: `AUDIT_REPORT.md` §9 item #15.*