Replaces the previous chunked-upload flow when TRACK_STORAGE_BACKEND=s3:
before: chunks → assembled file on disk → MigrateLocalToS3IfConfigured
opens the file → manager.Uploader streams in 10 MB parts
after: chunks → io.Pipe → manager.Uploader streams in 10 MB parts
(no assembled file on local disk)
Eliminates the second local copy of every upload and ~500 MB of disk
I/O per concurrent 500 MB upload. The local-storage path
(TRACK_STORAGE_BACKEND=local, the default) is unchanged: it still goes
through CompleteChunkedUpload + CreateTrackFromPath because ClamAV needs
the assembled file on disk (the chunked path skips ClamAV by design; see
audit).
New surface:
- TrackChunkService.StreamChunkedUpload(ctx, uploadID, dst io.Writer)
— extracted from CompleteChunkedUpload, writes chunks in order to
any io.Writer, computes SHA-256 + verifies expected size, cleans
up Redis state on success and preserves it on failure (resumable).
- TrackService.CreateTrackFromChunkedUploadToS3 — orchestrates
io.Pipe + goroutine, deletes orphan S3 objects on assembly failure,
creates the Track row with storage_backend=s3 + storage_key.
Tests: 4 chunk-service stream tests (happy / writer error / size
mismatch / delegation) + 4 service tests (happy / wrong backend /
stream error / S3 upload error). One E2E @critical-s3 spec, gated on
S3 availability via /health/deep, ships now and starts running once
MinIO is added to the e2e workflow services block.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>