The prod and staging compose files were passing AWS_S3_ENDPOINT,
AWS_S3_BUCKET, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY but NOT
the two flags that actually activate the routing:
- AWS_S3_ENABLED (default false in code → S3 stack skipped)
- TRACK_STORAGE_BACKEND (default "local" in code → uploads to disk)
So both prod and staging deploys were silently writing track uploads
to local disk despite the apparent S3 wiring. With blue/green
active/active behind HAProxy, that is an HA bug: uploads landing on
the blue instance are invisible to green, and vice versa.
Set both flags in:
- docker-compose.staging.yml backend service (1 instance)
- docker-compose.prod.yml backend_blue + backend_green (2 instances,
same env block via replace_all)
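The added entries look roughly like this (a sketch of the prod env
block; the existing S3 keys and all other settings are elided, and
the exact YAML shape of the env block may differ):

```yaml
services:
  backend_blue:
    environment:
      # Existing AWS_S3_* wiring (endpoint, bucket, credentials) stays as-is.
      AWS_S3_ENABLED: "true"        # activates the S3 stack (code default: false)
      TRACK_STORAGE_BACKEND: "s3"   # routes uploads to S3 (code default: "local")
  backend_green:
    environment:
      AWS_S3_ENABLED: "true"
      TRACK_STORAGE_BACKEND: "s3"
```

The staging file gets the same two entries on its single backend
service.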
The code already validates on startup that TRACK_STORAGE_BACKEND=s3
requires AWS_S3_ENABLED=true (config.go:1040-1042), so a partial
config now fails loudly instead of silently falling back to local.
The S3StorageService is already implemented (services/s3_storage_service.go)
and wired into TrackService.UploadTrack via the storageBackend dispatcher
(core/track/service.go:432). HLS segment output remains on the
hls_*_data volume — that's a separate concern (stream server local
write), out of scope for this compose-only fix.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
MinIO images used the floating `:latest` tag in 4 compose files, a
supply-chain risk (auto-updates on every `docker compose pull`,
bit-rot if upstream changes behavior). Pin them to the dated
RELEASE.* tags documented by MinIO (conservative Sep 2025 release).
Changed:
docker-compose.yml ×2 (minio + mc)
docker-compose.dev.yml ×2
docker-compose.prod.yml ×2
docker-compose.staging.yml ×2
Tags:
minio/minio:RELEASE.2025-09-07T16-13-09Z
minio/mc:RELEASE.2025-09-07T05-25-40Z
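In each of the four compose files the image lines become (a sketch;
the service names `minio` and `mc` are assumed here):

```yaml
services:
  minio:
    image: minio/minio:RELEASE.2025-09-07T16-13-09Z   # was minio/minio:latest
  mc:
    image: minio/mc:RELEASE.2025-09-07T05-25-40Z      # was minio/mc:latest
```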
Operator should bump to the latest verified release when they next
revisit infra. The tag was chosen conservatively; if it does not
exist in the registry, `docker compose pull` will surface the error
immediately (safer than silent drift).
Refs: AUDIT_REPORT.md §6.1 Dette 1 (MinIO :latest 4 occurrences).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
- Add HLSEnabled and HLSStorageDir to backend config (HLS_STREAMING env)
- Register HLS serving routes (master.m3u8, quality playlist, segments)
behind HLSEnabled feature flag on existing track routes
- Add GetHLSStatus and TriggerHLSTranscode methods to StreamService
for stream server communication
- Update docker-compose (dev, staging, prod) with HLS env vars and
shared hls-data volume between backend and stream-server
- Stream callback already correctly updates stream_manifest_url
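The compose wiring added above looks roughly like this (a sketch:
the mount path `/hls` and the exact env keys beyond HLS_STREAMING
are assumptions, not taken from the actual files):

```yaml
services:
  backend:
    environment:
      HLS_STREAMING: "true"     # maps to HLSEnabled in backend config
    volumes:
      - hls-data:/hls           # read side: serves playlists and segments
  stream-server:
    volumes:
      - hls-data:/hls           # write side: transcoder output
volumes:
  hls-data:                     # shared between backend and stream-server
```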
Production (docker-compose.prod.yml):
- Change sslmode=disable to sslmode=require on all 3 DATABASE_URLs
- Replace JWT_SECRET fallback defaults with :? syntax (fails if unset)
- Replace DB_PASS default 'password' with :? syntax (fails if unset)
- Separate RABBITMQ_PASS from DB_PASS, require explicit setting
Staging (docker-compose.staging.yml):
- Add sslmode=require to DATABASE_URL
- Replace all default passwords with :? syntax (fails if unset)
docker-compose up with these files will now FAIL if required secrets
are not explicitly provided via environment variables.
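The pattern in both files uses compose's required-variable
interpolation, which aborts startup when the variable is unset or
empty. A sketch (service and database names here are illustrative):

```yaml
services:
  backend:
    environment:
      # ${VAR:?msg} makes docker compose fail fast if VAR is unset or empty
      JWT_SECRET: ${JWT_SECRET:?JWT_SECRET must be set}
      DB_PASS: ${DB_PASS:?DB_PASS must be set}
      RABBITMQ_PASS: ${RABBITMQ_PASS:?RABBITMQ_PASS must be set}
      DATABASE_URL: postgres://app:${DB_PASS:?}@db:5432/app?sslmode=require
```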
Addresses audit findings: A02 (Cryptographic Failures), section 7 (Infra).
Co-authored-by: Cursor <cursoragent@cursor.com>