Closes MinIO Phase 3: ops path for migrating existing tracks.
Usage:
export DATABASE_URL=... AWS_S3_BUCKET=... AWS_S3_ENDPOINT=... ...
migrate_storage --dry-run --limit=10 # plan a batch
migrate_storage --batch-size=50 --limit=500 # migrate first 500
migrate_storage --delete-local=true # also rm local files
Design:
- Idempotent: WHERE storage_backend='local' + per-row DB update means
a crashed run resumes cleanly without duplicating uploads.
- Streaming upload via S3StorageService.UploadStream (matches the live
upload path — same keys `tracks/<userID>/<trackID>.<ext>`, same MIME
resolution).
- Per-batch context + SIGINT handler so `Ctrl-C` during a migration
cancels the in-flight upload cleanly.
- Global `--timeout-min=30` safety cap.
- `--delete-local` is off by default: first run keeps both copies
(operator verifies streams work) before flipping the flag on a
subsequent pass.
- Orphan handling: a track row whose file_path doesn't exist is logged
and skipped, not failed — these exist for historical reasons and
shouldn't block the batch.
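The idempotency bullet above can be sketched in miniature. This is a
hedged illustration with an in-memory stand-in for the tracks table
(the real tool issues `SELECT ... WHERE storage_backend='local'` and a
per-row `UPDATE`); the point is that flipping each row immediately
after its upload makes a re-run a no-op:

```go
package main

import "fmt"

// track is a hypothetical in-memory stand-in for a tracks row,
// used only to illustrate the resume-after-crash argument.
type track struct {
	ID             int
	StorageBackend string
}

// migrateBatch uploads every 'local' track and flips the row to 's3'
// immediately after each successful upload, so a crash mid-batch
// leaves already-migrated rows marked and they are not re-selected.
func migrateBatch(tracks []*track, upload func(id int) error) (uploaded int) {
	for _, t := range tracks {
		if t.StorageBackend != "local" { // the WHERE clause
			continue
		}
		if err := upload(t.ID); err != nil {
			continue // row stays 'local'; the next run retries it
		}
		t.StorageBackend = "s3" // per-row update, not end-of-batch
		uploaded++
	}
	return uploaded
}

func main() {
	tracks := []*track{{1, "local"}, {2, "local"}, {3, "s3"}}
	up := func(int) error { return nil }
	fmt.Println(migrateBatch(tracks, up)) // → 2 (the two 'local' rows)
	fmt.Println(migrateBatch(tracks, up)) // → 0 (second run is a no-op)
}
```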
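The SIGINT and timeout bullets compose with the standard library's
context plumbing. A minimal sketch (timeouts shortened so it runs
quickly; the real tool presumably threads `ctx` into the S3 client
calls):

```go
package main

import (
	"context"
	"fmt"
	"os"
	"os/signal"
	"time"
)

func main() {
	// Ctrl-C cancels ctx, which in turn cancels any in-flight
	// upload that accepts a context.
	ctx, stop := signal.NotifyContext(context.Background(), os.Interrupt)
	defer stop()

	// Global safety cap, analogous to --timeout-min=30
	// (milliseconds here so the example terminates immediately).
	ctx, cancel := context.WithTimeout(ctx, 30*time.Millisecond)
	defer cancel()

	// Per-batch child context: a slow batch cannot outlive the
	// global cap or an operator's Ctrl-C.
	batchCtx, batchCancel := context.WithTimeout(ctx, 10*time.Millisecond)
	defer batchCancel()

	<-batchCtx.Done()
	fmt.Println(batchCtx.Err()) // context.DeadlineExceeded here
}
```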
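The orphan-handling policy (log and skip, don't fail the batch) comes
down to distinguishing a missing file from a real I/O error. A sketch
of that check, with a hypothetical `isOrphan` helper:

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// isOrphan reports whether a track row points at a file that no
// longer exists on disk; such rows are logged and skipped rather
// than counted as errors.
func isOrphan(filePath string) (bool, error) {
	_, err := os.Stat(filePath)
	if errors.Is(err, os.ErrNotExist) {
		return true, nil // historical row: skip, don't fail the batch
	}
	// err is nil for present files; anything else (permissions,
	// bad mount) is a real error that should count against the run.
	return false, err
}

func main() {
	orphan, _ := isOrphan("/no/such/track.mp3")
	fmt.Println(orphan) // → true: would be logged and skipped
}
```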
Known edge case: if the S3 upload succeeds but the DB update fails, the
object is in S3 but the row still says 'local'. The log message spells
out the reconcile query. v1.0.9 could add a verification pass.
Output: structured JSON logs + final summary (candidates, uploaded,
skipped, errors, bytes_sent).
Refs: plan Batch A step A6, migration 985 schema (Phase 0,