Compare commits

...

3 commits

Author SHA1 Message Date
senke
d03232c85c feat(storage): add track storage_backend column + config prep (v1.0.8 P0)
Phase 0 of the MinIO upload migration (FUNCTIONAL_AUDIT §4 item 2).
Schema + config only — Phase 1 will wire TrackService.UploadTrack()
to actually route writes to S3 when the flag is flipped.

Schema (migration 985):
- tracks.storage_backend VARCHAR(16) NOT NULL DEFAULT 'local'
  CHECK in ('local', 's3')
- tracks.storage_key VARCHAR(512) NULL (S3 object key when backend=s3)
- Partial index on storage_backend = 's3' (migration progress queries)
- Rollback drops both columns + index; safe only while all rows are
  still 'local' (guard query in the rollback comment)

Go model (internal/models/track.go):
- StorageBackend string (default 'local', not null)
- StorageKey *string (nullable)
- Both tagged json:"-" — internal plumbing, never exposed publicly

Config (internal/config/config.go):
- New field Config.TrackStorageBackend
- Read from TRACK_STORAGE_BACKEND env var (default 'local')
- Production validation rule #11 (ValidateForEnvironment):
  - Must be 'local' or 's3' (reject typos like 'S3' or 'minio')
  - If 's3', requires AWS_S3_ENABLED=true (fail fast, do not boot with
    TrackStorageBackend=s3 while S3StorageService is nil)
- Dev/staging warns and falls back to 'local' instead of fail — keeps
  iteration fast while still flagging misconfig.
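The prod-vs-dev split above can be condensed into a standalone sketch. This is illustrative only: `resolveTrackBackend` and its signature are invented for this example; the real checks live in `ValidateForEnvironment()` in `internal/config/config.go` (see the diff below in this compare view).

```go
package main

import "fmt"

// resolveTrackBackend is a condensed, hypothetical version of validation
// rule #11: production rejects unknown values and s3-without-S3-stack,
// while dev/staging warn (logging omitted here) and fall back to "local".
func resolveTrackBackend(env, backend string, s3Enabled bool) (string, error) {
	known := backend == "local" || backend == "s3"
	if env == "production" {
		// Fail fast on typos ("S3", "minio") and on s3 with the stack disabled.
		if !known {
			return "", fmt.Errorf("TRACK_STORAGE_BACKEND must be 'local' or 's3', got %q", backend)
		}
		if backend == "s3" && !s3Enabled {
			return "", fmt.Errorf("TRACK_STORAGE_BACKEND=s3 requires AWS_S3_ENABLED=true")
		}
		return backend, nil
	}
	// Dev/staging: keep iteration fast, fall back instead of failing.
	if !known || (backend == "s3" && !s3Enabled) {
		return "local", nil
	}
	return backend, nil
}

func main() {
	if _, err := resolveTrackBackend("production", "S3", false); err != nil {
		fmt.Println("prod rejects typo:", err)
	}
	b, _ := resolveTrackBackend("staging", "s3", false)
	fmt.Println("staging falls back to:", b)
}
```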

Docs:
- docs/ENV_VARIABLES.md §13 restructured as "HLS + track storage backend"
  with a migration playbook (local → s3 → migrate-storage CLI)
- docs/ENV_VARIABLES.md §28 validation rules: +2 entries for new rules
- docs/ENV_VARIABLES.md §29 drift findings: TRACK_STORAGE_BACKEND added
  to "missing from template" list before it was fixed
- veza-backend-api/.env.template: TRACK_STORAGE_BACKEND=local with
  comment pointing at Phase 1/2/3 plans

No behavior change yet — TrackService.UploadTrack() still hardcodes the
local path via copyFileAsync(). Phase 1 wires it.
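As context for that Phase 1 wiring, the routing could look roughly like the following sketch. Everything here (`blobWriter`, `uploadTrack`, the writer stubs) is hypothetical and not part of this commit; only the `local`/`s3` values and the `storage_backend` semantics come from Phase 0.

```go
package main

import (
	"errors"
	"fmt"
)

// blobWriter is a stand-in for the two write paths (local FS vs S3StorageService).
type blobWriter func(key string, data []byte) error

// uploadTrack routes the write according to the configured backend and
// returns the value that would be recorded in tracks.storage_backend.
func uploadTrack(backend string, local, s3 blobWriter, key string, data []byte) (string, error) {
	switch backend {
	case "s3":
		// storage_key would hold the S3 object key; storage_backend='s3'.
		return "s3", s3(key, data)
	case "local":
		// Legacy path (today's copyFileAsync behaviour); storage_backend='local'.
		return "local", local(key, data)
	default:
		return "", errors.New("unknown TRACK_STORAGE_BACKEND: " + backend)
	}
}

func main() {
	discard := func(key string, data []byte) error { return nil }
	b, err := uploadTrack("s3", discard, discard, "tracks/42/demo.mp3", []byte("audio"))
	fmt.Println(b, err)
}
```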

Refs:
- AUDIT_REPORT.md §9 item (deferrals v1.0.8)
- FUNCTIONAL_AUDIT.md §4 item 2 "Stockage local disque only"
- /home/senke/.claude/plans/audit-fonctionnel-wild-hickey.md Item 3

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 19:54:28 +02:00
senke
4a6a6293e3 fix(e2e): hard-fail global-setup when rate limiting detected
Previously the rate-limit probe emitted a warning box when it
detected active rate limiting (implying the backend was started
without DISABLE_RATE_LIMIT_FOR_TESTS=true) but let the test run
proceed. The flaky 401s on 02-navigation.spec.ts:77 (and sibling
specs using loginViaAPI in beforeEach) all trace to this silent
failure mode — seed users get progressively locked out as each
spec fires rapid login attempts against the real rate limiter.

Replace console.error(box) with throw new Error(), pointing the
developer at `make dev-e2e`. This preserves fast iteration when the
setup is correct and only blocks misconfigured runs.

Root cause trace:
- tests/e2e/playwright.config.ts:139 uses reuseExistingServer=true,
  so env vars declared in webServer.env (DISABLE_RATE_LIMIT_FOR_TESTS,
  APP_ENV=test, RATE_LIMIT_LIMIT=10000, ACCOUNT_LOCKOUT_EXEMPT_EMAILS)
  are IGNORED if a non-test-mode backend already owns port 18080.
- Previous global-setup warn path emitted a console box but kept
  running — lockout appeared later, looking like a random flake.

Refactored the try/catch: the probe stays wrapped (API-down is still OK),
and the got429 sentinel (renamed rateLimitActive) is lifted outside the
try so the throw isn't swallowed.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 19:15:39 +02:00
senke
47afb055a2 chore(docs): archive obsolete v0.12.6 security docs
Move ASVS_CHECKLIST_v0.12.6.md, PENTEST_REPORT_VEZA_v0.12.6.md, and
REMEDIATION_MATRIX_v0.12.6.md to docs/archive/ — all reference a
pentest conducted on v0.12.6 (2026-03), stale relative to the current
v1.0.7 codebase (different security middleware, different payment
flow, different config validation).

Update CLAUDE.md tree listing and AUDIT_REPORT.md §9.1 to reflect the
archive location. Keep docs/SECURITY_SCAN_RC1.md (still current).

Closes AUDIT_REPORT §9.1 obsolete-doc item.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 15:32:25 +02:00
12 changed files with 158 additions and 24 deletions


@@ -580,7 +580,7 @@ Ranking for what comes next (post-v1.0.7-rc1 → v1.0.7 final → v1.0.8).
- `internal/repository/` (orphaned)
- `proto/chat/chat.proto` + the `veza-common/src/chat.rs` types
- `apps/web/src/components/ui/{hover-card,dropdown-menu,optimized-image}/` orphaned
- `docs/ASVS_CHECKLIST_v0.12.6.md` + `docs/PENTEST_REPORT_VEZA_v0.12.6.md` (v0.12.6 obsolete)
- ~~`docs/ASVS_CHECKLIST_v0.12.6.md` + `docs/PENTEST_REPORT_VEZA_v0.12.6.md` + `docs/REMEDIATION_MATRIX_v0.12.6.md`~~ ✅ archived in `docs/archive/` (2026-04-23)
### 9.2 "To finish before starting anything new"


@@ -105,9 +105,8 @@ veza/
│ ├── PRODUCTION_DEPLOYMENT.md
│ ├── STAGING_DEPLOYMENT.md
│ ├── SECURITY_SCAN_RC1.md
│ ├── ASVS_CHECKLIST_v0.12.6.md
│ ├── PENTEST_REPORT_VEZA_v0.12.6.md
│ └── archive/ # Retros, smoke tests, historical plans
│ # (v0.12.6 ASVS+PENTEST+REMEDIATION archived here 2026-04-23)
├── veza-docs/ # Separate Docusaurus site
│ ├── docs/current/ # Current docs


@@ -237,7 +237,9 @@ Opt-in. The main upload path does not yet use S3 (FUNCTIONAL_AUDIT §4 i
| `AWS_ACCESS_KEY_ID` | (empty) | `config.go:362` | Optional with an EC2 IAM role. |
| `AWS_SECRET_ACCESS_KEY` | (empty) | `config.go:363` | — |
## 13. HLS streaming
## 13. HLS streaming + track storage backend
### HLS
| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
@@ -248,6 +250,20 @@ Opt-in. The main upload path does not yet use S3 (FUNCTIONAL_AUDIT §4 i
Activation pattern: `HLS_STREAMING=true` + stream server up + transcoding pipeline active. Off by default; the direct-stream fallback is enough to get started.
### Track upload storage backend (v1.0.8 Phase 0)
| Variable | Default | Read at | Role |
| --- | --- | --- | --- |
| **`TRACK_STORAGE_BACKEND`** | `local` | `config.go` (S3 block) | Where `TrackService.UploadTrack()` writes track files. Values: `local` (disk, `uploads/tracks/`) or `s3` (via `S3StorageService`). **In prod, `s3` requires `AWS_S3_ENABLED=true`**, otherwise boot fails (`validation.go` rule 11). Dev/staging warn and fall back to `local`. |
To migrate an environment:
1. Deploy with `TRACK_STORAGE_BACKEND=local` (no change).
2. Enable S3: `AWS_S3_ENABLED=true`, `AWS_S3_BUCKET=<bucket>`, `AWS_S3_ENDPOINT=<minio-url>`, credentials.
3. Flip the switch: `TRACK_STORAGE_BACKEND=s3`. New uploads go to MinIO/S3; existing tracks stay `local`.
4. (Optional) Run `cmd/migrate_storage` (Phase 3) to move `local` tracks to `s3`.
Rollback: revert to `TRACK_STORAGE_BACKEND=local`; new uploads go back to local disk. Tracks already migrated stay on `s3` (the read path serves them via signed URLs in Phase 2).
## 14. Stream server (backend ↔ stream)
| Variable | Default | Read at | Role |
@@ -489,6 +505,8 @@ Still work with a deprecation warning. **Removed in v1.1.0** — m
| `OAUTH_ENCRYPTION_KEY` ≥32 bytes required in prod | `validation.go:896` |
| `HYPERSWITCH_ENABLED=true` requires `HYPERSWITCH_API_KEY` + webhook secret | `validation.go:908` |
| `REDIS_URL` must be set explicitly in prod (no default fallback) | `validation.go:916` |
| `TRACK_STORAGE_BACKEND` must be `local` or `s3` (rule 11, v1.0.8) | `config.go` `ValidateForEnvironment` |
| `TRACK_STORAGE_BACKEND=s3` requires `AWS_S3_ENABLED=true` in prod | `config.go` `ValidateForEnvironment` |
| `BYPASS_CONTENT_CREATOR_ROLE=true` rejected in prod | `validation.go:1018` |
If a rule is violated, boot fails with a human-readable message pointing at the offending variable.
@@ -500,7 +518,7 @@ Manual validation: `./scripts/validate-env.sh development` (or `production`,
The 2026-04-23 survey identified inconsistencies between `.env.template` and the code. Non-blocking; cleanup in v1.1.0.
**Missing from template** (the code reads them; a dev can get by without defining them):
`HLS_STREAMING` (added 2026-04-23), `SLOW_REQUEST_THRESHOLD_MS`, `CONFIG_WATCH`, `HANDLER_TIMEOUT`, `MAX_JSON_BODY_SIZE`, `GEOIP_DB_PATH`, `VAPID_PRIVATE_KEY` / `VAPID_PUBLIC_KEY`, `RTMP_CALLBACK_SECRET`, `NGINX_RTMP_*`, `HARD_DELETE_CRON_ENABLED`, `ELASTICSEARCH_AUTO_INDEX`.
`HLS_STREAMING` (added 2026-04-23), `TRACK_STORAGE_BACKEND` (added 2026-04-23), `SLOW_REQUEST_THRESHOLD_MS`, `CONFIG_WATCH`, `HANDLER_TIMEOUT`, `MAX_JSON_BODY_SIZE`, `GEOIP_DB_PATH`, `VAPID_PRIVATE_KEY` / `VAPID_PUBLIC_KEY`, `RTMP_CALLBACK_SECRET`, `NGINX_RTMP_*`, `HARD_DELETE_CRON_ENABLED`, `ELASTICSEARCH_AUTO_INDEX`.
**In template but never read**: `REDIS_ADDR`, `REDIS_PASSWORD`, `REDIS_DB`, `JWT_ACCESS_TOKEN_DURATION`, `JWT_REFRESH_TOKEN_DURATION`, `RATE_LIMIT_ENABLED`, `DATABASE_MAX_OPEN_CONNS` / `DATABASE_MAX_IDLE_CONNS` / `DATABASE_CONN_MAX_LIFETIME` (wrong names).


@@ -48,30 +48,35 @@ export default async function globalSetup() {
// ── Check that rate limiting is disabled ───────────────────────
// The backend must be started with APP_ENV=test or DISABLE_RATE_LIMIT_FOR_TESTS=true
// Use `make dev-e2e` to start it correctly.
//
// History: before 2026-04-23 this block only emitted a boxed console.error
// when the probe detected a 429. Tests kept running anyway, which masked
// a progressive lockout of the seed users through loginViaAPI (e.g. the
// 02-navigation.spec.ts:77 flake). Now we hard-fail: if the backend is
// not in test mode, abort immediately.
let rateLimitActive = false;
try {
// Fire 10 rapid login requests to detect rate limiting
let got429 = false;
for (let i = 0; i < 10; i++) {
const r = await ctx.post('/api/v1/auth/login', {
data: { email: 'rate-limit-probe@test.invalid', password: 'x' },
});
if (r.status() === 429) { got429 = true; break; }
}
if (got429) {
console.error(
'\n ╔══════════════════════════════════════════════════════════════╗\n' +
' ║ ⚠ RATE LIMITING IS ACTIVE — E2E TESTS WILL BE FLAKY! ║\n' +
' ║ Restart the backend with: make dev-e2e ║\n' +
' ║ This sets APP_ENV=test & DISABLE_RATE_LIMIT_FOR_TESTS=true║\n' +
' ╚══════════════════════════════════════════════════════════════╝\n',
);
} else {
console.log(' Rate limiting: ✓ disabled (test mode)');
if (r.status() === 429) { rateLimitActive = true; break; }
}
} catch {
// Non-blocking — if API is down, the test will fail elsewhere
// Probe failure (API down, network): let it through; the health check
// above will already have reported the problem.
}
if (rateLimitActive) {
throw new Error(
'E2E aborted: rate limiting is ACTIVE on the backend. ' +
'Stop the current backend and restart with `make dev-e2e` ' +
'(sets APP_ENV=test + DISABLE_RATE_LIMIT_FOR_TESTS=true). ' +
'Full env set : tests/e2e/playwright.config.ts webServer.env.',
);
}
console.log(' Rate limiting: ✓ disabled (test mode)');
try {
const health = await ctx.get(`${CONFIG.streamURL}/health`);
console.log(` Stream Server: ${health.ok() ? '✓ OK' : '✗ DOWN'} (${health.status()})`);


@@ -150,6 +150,18 @@ HLS_STREAMING=false
# HLS segment storage directory (used only when HLS_STREAMING=true)
HLS_STORAGE_DIR=/tmp/veza-hls
# --- TRACK UPLOAD STORAGE BACKEND (v1.0.8 Phase 0, default local) ---
# Where TrackService.UploadTrack() writes new track files.
# local — writes to veza-backend-api/uploads/tracks/ (legacy behaviour,
# works single-pod, doesn't scale multi-pod)
# s3 — writes to S3/MinIO via S3StorageService. Requires
# AWS_S3_ENABLED=true and AWS_S3_BUCKET. In production, setting
# this to 's3' without S3 enabled fails startup with an explicit
# error (dev/staging warn and fall back to 'local').
# Phase 1 wires this into TrackService; Phase 2 migrates the read path;
# Phase 3 provides a migration CLI for existing local tracks.
TRACK_STORAGE_BACKEND=local
# --- FRONTEND URL ---
# Used for password reset links, email templates, etc.
FRONTEND_URL=http://veza.fr:5173


@@ -99,6 +99,13 @@ type Config struct {
S3SecretKey string // AWS secret key (optional; falls back to the default credentials if empty)
S3Enabled bool // Enable S3 storage
// Track upload storage backend (v1.0.8 Phase 0 — MinIO migration)
// "local" (default) = writes to veza-backend-api/uploads/tracks/
// "s3" = writes to S3StorageService bucket. Requires S3Enabled=true.
// Read by TrackService.UploadTrack(); switch is feature-flag-gated so
// operators can roll out per environment and roll back by flipping the env var.
TrackStorageBackend string
// Sentry configuration
SentryDsn string // Sentry DSN for error tracking
SentryEnvironment string // Sentry environment (dev, staging, prod)
@@ -363,6 +370,12 @@ func NewConfig() (*Config, error) {
S3SecretKey: getEnv("AWS_SECRET_ACCESS_KEY", ""),
S3Enabled: getEnvBool("AWS_S3_ENABLED", false), // Disabled by default
// Track upload storage backend (v1.0.8 Phase 0)
// "local" keeps the legacy path (writes to veza-backend-api/uploads/).
// "s3" routes uploads through S3StorageService. Validated in
// ValidateForEnvironment() — s3 requires S3Enabled.
TrackStorageBackend: getEnv("TRACK_STORAGE_BACKEND", "local"),
// Sentry configuration
SentryDsn: getEnv("SENTRY_DSN", ""),
SentryEnvironment: env, // Use the detected environment
@@ -917,6 +930,21 @@ func (c *Config) ValidateForEnvironment() error {
return fmt.Errorf("REDIS_URL must be explicitly set in production. A missing value lets the app boot against the default host and silently degrade to in-memory fallbacks that break cross-pod features")
}
// 11. v1.0.8: TRACK_STORAGE_BACKEND must be a known value, and "s3"
// requires the S3 stack to be enabled. Without this guard, a typo
// ("S3", "minio", "bucket") would fall through to local without
// warning; worse, TRACK_STORAGE_BACKEND=s3 with AWS_S3_ENABLED=false
// would crash when the upload path tries to resolve S3StorageService.
switch c.TrackStorageBackend {
case "local", "s3":
// OK
default:
return fmt.Errorf("TRACK_STORAGE_BACKEND must be 'local' or 's3', got %q", c.TrackStorageBackend)
}
if c.TrackStorageBackend == "s3" && !c.S3Enabled {
return fmt.Errorf("TRACK_STORAGE_BACKEND=s3 requires AWS_S3_ENABLED=true and AWS_S3_BUCKET set. Enable the S3 stack or switch TRACK_STORAGE_BACKEND back to 'local'")
}
case EnvTest:
// TEST: validation adapted for tests
// CORS may be empty or configured explicitly
@@ -931,6 +959,17 @@ func (c *Config) ValidateForEnvironment() error {
break
}
}
// v1.0.8: TRACK_STORAGE_BACKEND sanity — dev/staging uses the same
// switch as prod, but we warn instead of fail so operators can iterate.
if c.TrackStorageBackend != "local" && c.TrackStorageBackend != "s3" {
c.Logger.Warn("TRACK_STORAGE_BACKEND has unexpected value, falling back to 'local'",
zap.String("value", c.TrackStorageBackend))
c.TrackStorageBackend = "local"
}
if c.TrackStorageBackend == "s3" && !c.S3Enabled {
c.Logger.Warn("TRACK_STORAGE_BACKEND=s3 but AWS_S3_ENABLED=false, falling back to 'local' for track uploads")
c.TrackStorageBackend = "local"
}
}
return nil


@@ -36,6 +36,12 @@ type Track struct {
StatusMessage string `gorm:"type:text" json:"status_message,omitempty" db:"status_message"`
StreamStatus string `gorm:"default:'pending'" json:"stream_status" db:"stream_status"` // pending, processing, ready, error
StreamManifestURL string `gorm:"size:500" json:"stream_manifest_url" db:"stream_manifest_url"`
// v1.0.8 Phase 0 — multi-backend storage. Schema added in migration 985.
// Values: "local" (default, file_path is the local FS path) or "s3"
// (storage_key is the S3/MinIO object key inside config.S3Bucket).
// Hidden from JSON responses — internal plumbing only.
StorageBackend string `gorm:"size:16;default:'local';not null" json:"-" db:"storage_backend"`
StorageKey *string `gorm:"size:512" json:"-" db:"storage_key"`
// SECURITY(CRIT-002): play_count and like_count are PRIVATE — visible only to the creator
// in their analytics dashboard. Never exposed in public API responses.
// Ref: CLAUDE.md rule #4, ORIGIN_UI_UX_SYSTEM.md §13


@@ -0,0 +1,43 @@
-- 985_tracks_storage_backend.sql
-- v1.0.8 Phase 0 (MinIO upload migration) — prepare tracks schema for
-- multi-backend storage (local FS vs S3/MinIO). Schema-only change; Phase 1
-- will wire TrackService.UploadTrack() to actually route writes via the
-- storage_backend column.
--
-- Context: FUNCTIONAL_AUDIT §4 item 2 flagged "stockage local disque only"
-- as a multi-pod prod blocker. The S3 client exists but is never called on
-- the upload path. This migration makes it possible to gradually migrate
-- uploads without a big-bang switchover.
--
-- Columns added:
-- storage_backend — VARCHAR(16) NOT NULL DEFAULT 'local'
-- Which storage impl wrote this track. 'local' = file_path points at
-- veza-backend-api/uploads/tracks/... . 's3' = storage_key holds the
-- S3 object key inside config.S3Bucket. Check constraint limits values.
--
-- storage_key — VARCHAR(512) NULL
-- When storage_backend='s3', this is the S3 object key (e.g.
-- "tracks/<userID>/<uuid>.<ext>"). NULL when storage_backend='local'
-- (file_path carries the info in that case — kept for back-compat).
--
-- Index:
-- idx_tracks_storage_backend_s3 — partial index on s3-backed rows only.
-- Useful for the migration script (Phase 3) which sweeps local→s3 and
-- for monitoring queries (what % of tracks already migrated).
ALTER TABLE public.tracks
ADD COLUMN storage_backend VARCHAR(16) NOT NULL DEFAULT 'local'
CHECK (storage_backend IN ('local', 's3')),
ADD COLUMN storage_key VARCHAR(512);
-- Partial index: only indexes s3-backed rows (majority stay 'local' during
-- migration, so full index would be wasteful).
CREATE INDEX idx_tracks_storage_backend_s3
ON public.tracks(storage_backend)
WHERE storage_backend = 's3';
COMMENT ON COLUMN public.tracks.storage_backend IS
'Storage impl that owns the file: local (disk) or s3 (MinIO/S3). v1.0.8 multi-backend prep.';
COMMENT ON COLUMN public.tracks.storage_key IS
'S3 object key when storage_backend=s3. NULL when storage_backend=local (file_path carries the info).';


@@ -0,0 +1,12 @@
-- Rollback for 985_tracks_storage_backend.sql
-- WARNING: destroys storage_backend + storage_key data. Only safe if no
-- track has been migrated to s3 (i.e. all rows still storage_backend='local'
-- which is the default). Run a sanity query before rollback in prod:
-- SELECT COUNT(*) FROM public.tracks WHERE storage_backend = 's3';
-- Must return 0.
DROP INDEX IF EXISTS public.idx_tracks_storage_backend_s3;
ALTER TABLE public.tracks
DROP COLUMN IF EXISTS storage_key,
DROP COLUMN IF EXISTS storage_backend;