The realtime effects loop in src/audio/realtime.rs was using
`eprintln!` to surface effect processing errors. That bypasses the
tracing subscriber and so the error never reaches the OTel collector
or the structured-log pipeline — invisible to operators in prod.
Switched to `tracing::error!` with the error captured as a structured
field, matching the rest of the stream server.
Why this was the only console-style call to fix:
The earlier audit reported 23 `console.log` instances across the
codebase, but most were in JSDoc/Markdown blocks or commented-out
lines. The actual production-code count, after stripping comments,
was zero on the frontend, zero in the backend API server (the
`fmt.Print*` calls live in CLI tools under cmd/ and are legitimate),
and one in the stream server (this fix). The rest of the Rust
println! calls are in load-test binaries and #[cfg(test)] blocks.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Cross-checking the audit against the active composes surfaced three dormant compose
files that duplicate functionality already covered by the canonical
docker-compose.{,dev,prod,staging,test}.yml at the repo root. None are
referenced from Make targets, scripts, or CI workflows. They have
diverged from the active set (different ports, older Postgres version,
no shared volume names, etc.) and are a footgun for new contributors.
Files marked DEPRECATED with a header pointing at the canonical compose
to use instead:
veza-stream-server/docker-compose.yml
Standalone stream-server compose. Same service is provided by the
root docker-compose.yml under the `docker-dev` profile.
infra/docker-compose.lab.yml
Lab Postgres on default port 5432. Conflicts with a host Postgres on
most setups; root docker-compose.dev.yml uses non-default ports for
a reason.
config/docker/docker-compose.local.yml
Local Postgres 15 variant on port 5433. Redundant with root
docker-compose.dev.yml (Postgres 16, project-wide port mapping).
Not in this commit (intentionally limited J6 scope, per audit plan
"verify, don't refactor"):
- No `extends:` consolidation across the active composes — that is a
1-2 day refactor on its own and not a v1.0.4 concern.
- The five active composes were syntactically validated locally
(docker compose config); production and staging both require
operator-injected env vars (DB_PASS, S3_*, RABBITMQ_PASS, etc.)
which is the intended behavior, not a bug.
- Cross-compose audit confirms zero references to the removed
chat-server or any other dead service / image. Only one residual
deprecation warning across all active composes: the obsolete
`version:` field on docker-compose.{prod,test}.yml — cosmetic,
not blocking.
- Test suite verification (Go / Rust / Vitest) deferred to Forgejo CI
rather than re-running locally. The pre-push hook + remote pipeline
will gate the next push.
Follow-up candidates (not blocking v1.0.4):
- Delete the three deprecated files once a 2-month grace period
confirms no local dev workflow references them.
- Drop the obsolete `version:` field across the active composes.
Refs: AUDIT_REPORT.md §6.1, §10 P7
Two fixes surfaced by run #55:
1. veza-stream-server (47 files): cargo fmt had been run locally but its
   result was never committed; the changes sat staged, so the working tree
   looked clean locally while HEAD still had unformatted code. CI's
   `cargo fmt -- --check` caught the drift. This commit lands the
   already-staged formatting.
2. ci.yml Install Go tools: `go install .../cmd/golangci-lint@latest`
resolves to v1.64.8 (the old /cmd/ module path). The repo's
.golangci.yml is v2-format, so v1 refuses with:
"you are using a configuration file for golangci-lint v2
with golangci-lint v1: please use golangci-lint v2"
Switch to the /v2/cmd/ path so @latest actually gets v2.x.
Clippy `-D warnings` rejected a `vec![...]` allocation for a fixed-size array
literal used only via `.iter().all(...)`. Replacing it with a stack array
unblocks the rust-ci and stream-ci jobs, which both run `cargo clippy --all-targets`.
MEDIUM-002: Remove manual X-Forwarded-For parsing in metrics_protection.go,
use c.ClientIP() only (respects SetTrustedProxies)
MEDIUM-003: Pin ClamAV Docker image to 1.4 across all compose files
MEDIUM-004: Add clampLimit(100) to 15+ handlers that parsed limit directly
MEDIUM-006: Remove unsafe-eval from CSP script-src on Swagger routes
MEDIUM-007: Pin all GitHub Actions to SHA in 11 workflow files
MEDIUM-008: Replace rabbitmq:3-management-alpine with rabbitmq:3-alpine in prod
MEDIUM-009: Add trial-already-used check in subscription service
MEDIUM-010: Add 60s periodic token re-validation to WebSocket connections
MEDIUM-011: Mask email in auth handler logs with maskEmail() helper
MEDIUM-012: Add k-anonymity threshold (k=5) to playback analytics stats
LOW-001: Align frontend password policy to 12 chars (matching backend)
LOW-003: Replace deprecated dotenv with dotenvy crate in Rust stream server
LOW-004: Enable xpack.security in Elasticsearch dev/local compose files
LOW-005: Accept context.Context in CleanupExpiredSessions instead of Background()
LOW-002: Noted — Hyperswitch version update deferred (requires payment integration tests)
29/30 findings remediated. 1 noted (LOW-002).
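The MEDIUM-004 guard is the simplest of the batch; the real clampLimit helper lives in the Go API server, but its shape, sketched here in Rust for consistency with the other examples (names hypothetical):

```rust
// Hypothetical sketch of the clampLimit idea: any user-supplied `?limit=`
// value outside [1, max] is forced back into range before it reaches a query.
fn clamp_limit(requested: i64, max: i64) -> i64 {
    requested.clamp(1, max)
}

fn main() {
    assert_eq!(clamp_limit(10_000, 100), 100); // oversized page capped
    assert_eq!(clamp_limit(-5, 100), 1);       // non-positive falls back to 1
    assert_eq!(clamp_limit(25, 100), 25);      // in-range values pass through
}
```

Centralizing the bound in one helper is what made patching 15+ handlers tractable.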
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- Dynamic ORDER BY clauses: explicit whitelist, falling back to created_at DESC
- Login/register now go through the global rate limiter
- VERSION kept in sync, with a CI check
- Cleaned up stale veza-chat-server references
- Go 1.24 everywhere (Dockerfiles, workflows)
- TODO/FIXME/HACK comments converted to issues or resolved
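The dynamic ORDER BY hardening follows the standard pattern: user input is never interpolated into SQL, only mapped through an explicit whitelist. A minimal sketch (column names hypothetical):

```rust
// Map a user-supplied sort key to a known-safe SQL fragment; anything not in
// the whitelist falls back to the default ordering, so raw input never
// reaches the query text.
fn order_by_clause(requested: &str) -> &'static str {
    match requested {
        "title" => "title ASC",
        "plays" => "play_count DESC",
        "created_at" => "created_at DESC",
        _ => "created_at DESC", // fallback for unknown or hostile input
    }
}

fn main() {
    assert_eq!(order_by_clause("plays"), "play_count DESC");
    // An injection attempt maps to the safe default instead of reaching SQL.
    assert_eq!(order_by_clause("1; DROP TABLE users;--"), "created_at DESC");
}
```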
INT-05: 26 tests in chat-server (config, error, permissions, rate
limiter, logging, utils) and 25 tests in stream-server (config,
error, auth, HLS, signature, utils). All tests cover pure logic.
- Remove sqlx-data.json from .dockerignore to allow SQLX_OFFLINE build
- Use repo root as build context for veza-common path dependency
- Add SQLX_OFFLINE=true and COPY sqlx-data.json in Dockerfile
- Add hls_auth_middleware in stream server (Bearer + ?token=)
- Apply auth to /hls/:track_id/* routes
- Update frontend hlsService to use stream server URL + pass JWT via xhrSetup
- Add getHLSXhrSetup() and getHLSURLWithToken() for hls.js integration
- Add VITE_HLS_BASE_URL config (derived from VITE_STREAM_URL when unset)
- Add unit tests for token extraction and HLS helpers
- Mark audit item 1.3 as done
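On the stream-server side, the middleware accepts the JWT from either carrier, preferring the header. A simplified sketch of that extraction order (names hypothetical, not the middleware's actual code):

```rust
// Extract a bearer token from an `Authorization` header value or, failing
// that, from the raw query string (`?token=...`). Header wins over query
// param when both are present.
fn extract_token(auth_header: Option<&str>, query: &str) -> Option<String> {
    if let Some(h) = auth_header {
        if let Some(t) = h.strip_prefix("Bearer ") {
            return Some(t.to_string());
        }
    }
    query
        .split('&')
        .find_map(|pair| pair.strip_prefix("token="))
        .map(|t| t.to_string())
}

fn main() {
    assert_eq!(extract_token(Some("Bearer abc"), "").as_deref(), Some("abc"));
    assert_eq!(extract_token(None, "foo=1&token=xyz").as_deref(), Some("xyz"));
    assert_eq!(extract_token(None, "foo=1"), None);
}
```

The `?token=` path exists because hls.js media requests cannot always carry custom headers; `xhrSetup` covers the cases where they can.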
- Add turbo devDependency and packageManager to root
- Create turbo.json with build, test, lint pipeline
- Add package.json to veza-backend-api, veza-chat-server, veza-stream-server
- Extend workspaces to include Go and Rust services
- Migrate CI to use turbo run for build, test, lint
- Remove src/eventbus/ directory (orphan — event_bus.rs is the active module)
- Remove src/prometheus_metrics.rs (orphan duplicate — monitoring/prometheus_metrics.rs is active)
- Remove src/core/sync.rs_test_snippet (leftover artifact)
Stream server compiles with zero errors.
Co-authored-by: Cursor <cursoragent@cursor.com>
- Fix chat-ci.yml and stream-ci.yml to reference veza-chat-server/
and veza-stream-server/ instead of non-existent apps/ paths
- Add veza-common/ to CI triggers so shared library changes are tested
- Reactivate CD pipeline with Docker registry push and Kubernetes
deployment steps (gated on secrets availability)
- Standardize Redis dependency to v0.32 across both Rust services
Co-authored-by: Cursor <cursoragent@cursor.com>
- Add IsURLSafe() function to webhook service blocking private IPs,
localhost, and cloud metadata endpoints (SSRF protection)
- Implement real validate_track_access() in stream server querying DB
for track visibility, ownership, and purchase status
- Remove dangerous JWT fallback user in chat server that allowed
deleted users to maintain access with forged credentials
- Add upper limit (100) on pagination in profile, track, and room handlers
- Fix Dockerfile.production healthcheck path to /api/v1/health
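The core of the SSRF guard is an IP-range refusal list. The real IsURLSafe() lives in the Go webhook service; a hypothetical sketch of the address check, in Rust for consistency with the other examples:

```rust
use std::net::IpAddr;

// Refuse any resolved address that points inside the network perimeter:
// loopback, RFC 1918 private ranges, and link-local (which includes the
// 169.254.169.254 cloud metadata endpoint).
fn is_ip_safe(ip: IpAddr) -> bool {
    match ip {
        IpAddr::V4(v4) => {
            !v4.is_loopback()          // 127.0.0.0/8
                && !v4.is_private()    // 10/8, 172.16/12, 192.168/16
                && !v4.is_link_local() // 169.254/16, incl. cloud metadata
                && !v4.is_unspecified()
        }
        IpAddr::V6(v6) => !v6.is_loopback() && !v6.is_unspecified(),
    }
}

fn main() {
    assert!(!is_ip_safe("169.254.169.254".parse().unwrap())); // metadata
    assert!(!is_ip_safe("127.0.0.1".parse().unwrap()));
    assert!(!is_ip_safe("10.0.0.7".parse().unwrap()));
    assert!(is_ip_safe("93.184.216.34".parse().unwrap()));
}
```

A production check must also validate the IP *after* DNS resolution (and on redirects) to defeat rebinding; the address test alone is not sufficient.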
Co-authored-by: Cursor <cursoragent@cursor.com>
Replace the stub check_rate_limit() that always returned true with a
real implementation backed by the governor crate (already in Cargo.toml).
- Per-IP keyed rate limiter with DashMap store
- Default quota: 120 requests/minute per IP
- Sliding window enforcement via governor's GCRA algorithm
- Returns 429 Too Many Requests when limit exceeded
Addresses audit findings D7, A04: missing rate limiting on stream-server.
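The actual enforcement is governor's GCRA cells keyed in a DashMap; the sketch below is a deliberately simpler fixed-window stand-in (all names hypothetical) that only illustrates the per-IP keying and the allow/429 decision:

```rust
use std::collections::HashMap;
use std::net::IpAddr;
use std::time::{Duration, Instant};

// Illustrative fixed-window limiter; the real implementation uses the
// governor crate's GCRA algorithm, which smooths bursts rather than
// resetting counters at window boundaries.
struct RateLimiter {
    quota: u32,       // requests allowed per window (120/min in prod)
    window: Duration, // window length
    seen: HashMap<IpAddr, (Instant, u32)>, // per-IP (window start, count)
}

impl RateLimiter {
    fn new(quota: u32, window: Duration) -> Self {
        Self { quota, window, seen: HashMap::new() }
    }

    /// true = allow the request; false = respond 429 Too Many Requests.
    fn check(&mut self, ip: IpAddr, now: Instant) -> bool {
        let entry = self.seen.entry(ip).or_insert((now, 0));
        if now.duration_since(entry.0) >= self.window {
            *entry = (now, 0); // window expired: start a fresh one
        }
        entry.1 += 1;
        entry.1 <= self.quota
    }
}

fn main() {
    let mut rl = RateLimiter::new(2, Duration::from_secs(60));
    let ip: IpAddr = "203.0.113.9".parse().unwrap();
    let t0 = Instant::now();
    assert!(rl.check(ip, t0));
    assert!(rl.check(ip, t0));
    assert!(!rl.check(ip, t0)); // third request in the window is rejected
}
```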
Co-authored-by: Cursor <cursoragent@cursor.com>
- Convert all sqlx::query!() and sqlx::query_scalar!() compile-time
macros to runtime sqlx::query() and sqlx::query_scalar() with .bind()
- Affected files: segment_tracker.rs, processor.rs, callbacks.rs
- This removes the dependency on .sqlx/ directory for offline mode
- Update Dockerfile to remove SQLX_OFFLINE=true and .sqlx COPY
- Stream server can now compile without a live database connection
The compile-time macros required either a DATABASE_URL at build time or
a .sqlx directory with cached query metadata (neither was available).
Runtime queries trade compile-time SQL validation for buildability.
Addresses audit finding: debt item 1 (stream server compilation).
Co-authored-by: Cursor <cursoragent@cursor.com>
- Change default ALLOWED_ORIGINS from wildcard (*) to localhost:5173
in veza-stream-server/docker-compose.yml
- Also fixed local .env (untracked) to use specific dev domains
Previously, the stream-server docker-compose defaulted to ALLOWED_ORIGINS=*
which would allow any origin to access the streaming API.
Addresses audit finding: A05 (Security Misconfiguration) — HIGH.
Co-authored-by: Cursor <cursoragent@cursor.com>
- Validate JWT token via AuthManager before accepting WebSocket connections
- Extract user_id from validated token claims instead of trusting query params
- Reject unauthenticated connections with 401 Unauthorized
- Add `authenticated` field to WebSocketConnection struct
- Update websocket_handler_wrapper to handle auth error responses
Previously, the WebSocket handler accepted all connections without validating
the token (comment: "pour l'instant, on accepte la connexion", i.e. "for now,
we accept the connection"). It now requires a valid JWT via the ?token= query
param or the Authorization header.
Addresses audit finding: A01 (Broken Access Control) — CRITICAL.
Co-authored-by: Cursor <cursoragent@cursor.com>
- Add SessionRevocationStore trait with InMemoryRevocationStore and RedisRevocationStore
- Wire up the Redis store when REDIS_URL is set in config.cache, falling back to in-memory otherwise
- Session revocation by session_id persists across restarts when using Redis
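The trait boundary is what makes the in-memory fallback cheap. A sketch of its shape (the real trait is presumably async and attaches a TTL so revocations expire with the session; this sketch keeps the synchronous minimum):

```rust
use std::collections::HashSet;

// Minimal sketch of the revocation-store shape; the Redis-backed variant
// implements the same trait against Redis commands instead of a local set.
trait SessionRevocationStore {
    fn revoke(&mut self, session_id: &str);
    fn is_revoked(&self, session_id: &str) -> bool;
}

#[derive(Default)]
struct InMemoryRevocationStore {
    revoked: HashSet<String>,
}

impl SessionRevocationStore for InMemoryRevocationStore {
    fn revoke(&mut self, session_id: &str) {
        self.revoked.insert(session_id.to_string());
    }
    fn is_revoked(&self, session_id: &str) -> bool {
        self.revoked.contains(session_id)
    }
}

fn main() {
    let mut store = InMemoryRevocationStore::default();
    store.revoke("sess-42");
    assert!(store.is_revoked("sess-42"));
    assert!(!store.is_revoked("sess-7"));
}
```

Swapping the implementation behind the trait is how revocations survive restarts when Redis is configured, with no call-site changes.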
Co-authored-by: Cursor <cursoragent@cursor.com>