veza/infra/ansible/roles/minio_distributed/templates/minio.env.j2
senke d86815561c
feat(infra): MinIO distributed EC:2 + migration script (W3 Day 12)
Four-node distributed MinIO cluster, single erasure set EC:2, tolerates
2 simultaneous node losses. 50% storage efficiency. Pinned to
RELEASE.2025-09-07T16-13-09Z to match docker-compose so dev/prod
parity is preserved.
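The EC:2 trade-off above can be sanity-checked with simple arithmetic; the values below are the 4-node, one-drive-per-node layout from this commit:

```shell
# Back-of-envelope check (illustrative, not from the repo): a single
# erasure set of 4 drives with EC:2 spends 2 drives on parity, so any
# 2 drives (here, 2 whole nodes) can drop and reads still succeed,
# at the cost of 50% usable capacity.
drives=4
parity=2
data=$((drives - parity))
echo "usable: ${data}/${drives} ($((100 * data / drives))%)"
```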

- infra/ansible/roles/minio_distributed/: install pinned binary,
  systemd unit pointed at MINIO_VOLUMES in bracket-expansion form,
  EC:2 forced via MINIO_STORAGE_CLASS_STANDARD. Vault assertion
  blocks shipping placeholder credentials to staging/prod.
- bucket init: creates veza-prod-tracks, enables versioning, applies
  lifecycle.json (30d noncurrent expiry + 7d abort-multipart). Cold-tier
  transition ready but inert until minio_remote_tier_name is set.
- infra/ansible/playbooks/minio_distributed.yml: provisions the 4
  containers, applies common baseline + role.
- infra/ansible/inventory/lab.yml: new minio_nodes group.
- infra/ansible/tests/test_minio_resilience.sh: kill 2 nodes,
  verify EC:2 reconstruction (read OK + checksum matches), restart,
  wait for self-heal.
- scripts/minio-migrate-from-single.sh: mc mirror --preserve from
  the single-node bucket to the new cluster, count-verifies, prints
  rollout next-steps.
- config/prometheus/alert_rules.yml: MinIODriveOffline (warn) +
  MinIONodesUnreachable (page) — page fires at >= 2 nodes unreachable
  because that's the redundancy ceiling for EC:2.
- docs/ENV_VARIABLES.md §12: MinIO migration cross-ref.
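The bucket-init step above can be sketched roughly as follows. This is a hedged sketch, not the repo's task: the `prod` mc alias is an assumption, the lifecycle JSON follows the standard S3 shape that `mc ilm import` accepts, and the `mc` calls only run when the client is installed:

```shell
# Hypothetical lifecycle.json matching the commit message:
# 30d noncurrent expiry + 7d abort-incomplete-multipart.
cat > lifecycle.json <<'EOF'
{
  "Rules": [
    {
      "ID": "noncurrent-expiry",
      "Status": "Enabled",
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
EOF

# Applied only when mc is on PATH; "prod" alias is an assumed name.
if command -v mc >/dev/null 2>&1; then
  mc mb --ignore-existing prod/veza-prod-tracks
  mc version enable prod/veza-prod-tracks
  mc ilm import prod/veza-prod-tracks < lifecycle.json
fi
```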
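The resilience test's core loop can be approximated as below. Container names, the `cluster` alias, and the probe object are assumptions (the real script is not shown); the guarded block needs both mc and incus:

```shell
# Read the object's checksum, kill 2 of 4 nodes (the EC:2 ceiling),
# and confirm the reconstructed read matches byte-for-byte.
checksum() { sha256sum | awk '{print $1}'; }

if command -v mc >/dev/null 2>&1 && command -v incus >/dev/null 2>&1; then
  before=$(mc cat cluster/veza-prod-tracks/probe.bin | checksum)
  incus stop minio-3 minio-4            # 2 simultaneous node losses
  after=$(mc cat cluster/veza-prod-tracks/probe.bin | checksum)
  [ "$before" = "$after" ] && echo "EC:2 reconstruction OK"
  incus start minio-3 minio-4           # restart, then wait for self-heal
fi
```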
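The migration script's mirror-then-count-verify flow, sketched under assumed alias names (`single` for the old node, `cluster` for the new set), not copied from the actual script:

```shell
# Mirror with attributes preserved, then require identical object counts
# before declaring the copy safe for cutover.
migrate() {
  mc mirror --preserve "single/$1" "cluster/$1"
}
counts_match() {
  [ "$1" -eq "$2" ]
}

if command -v mc >/dev/null 2>&1; then
  migrate veza-prod-tracks
  src=$(mc ls --recursive single/veza-prod-tracks | wc -l)
  dst=$(mc ls --recursive cluster/veza-prod-tracks | wc -l)
  counts_match "$src" "$dst" && echo "counts match: $src objects"
fi
```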

Acceptance (Day 12): EC:2 survives 2 concurrent kills + self-heals.
Lab apply pending. No backend code change — interface stays AWS S3.

W3 progress: Redis Sentinel ✓ (Day 11), MinIO distributed ✓ (this),
CDN → Day 13, DMCA → Day 14, embed → Day 15.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 13:46:42 +02:00


# Managed by Ansible — do not edit by hand.
# Distributed MinIO env file. Same values on every node — MinIO reads
# MINIO_VOLUMES (exported to the systemd unit's ExecStart through this
# env file) to discover the cluster topology.
MINIO_ROOT_USER={{ minio_root_user }}
MINIO_ROOT_PASSWORD={{ minio_root_password }}
# Cluster topology — bracket-expansion form. MinIO expands
# minio-{1...4}.lxd into the 4 hostnames and dials each on the S3 port.
# Single drive per node = single erasure set of 4 drives.
MINIO_VOLUMES="http://minio-{1...{{ groups['minio_nodes'] | length }}}.lxd:{{ minio_port }}{{ minio_data_path }}"
# Force EC:2 on the standard storage class. Without this, MinIO
# auto-picks based on drive count; pinning makes the policy explicit.
MINIO_STORAGE_CLASS_STANDARD={{ minio_storage_class_standard }}
# Console UI binds on a separate port so the firewall can isolate it
# from public S3 traffic. Behind a reverse proxy in prod.
MINIO_OPTS="--console-address :{{ minio_console_port }}"
# Prometheus metrics — scraped without bearer auth since traffic stays
# on the local Incus bridge. mTLS is W4 territory.
MINIO_PROMETHEUS_AUTH_TYPE=public
# Browser banner — shows in the console so operators know which
# instance they're poking at.
MINIO_BROWSER_REDIRECT_URL=http://{{ ansible_hostname }}.lxd:{{ minio_console_port }}
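
For illustration, the MINIO_VOLUMES line renders roughly as shown below; port 9000 and /mnt/minio/data are assumed example values for `minio_port` and `minio_data_path`, not taken from the repo:

```shell
# Assumed example values — the real ones live in Ansible vars.
nodes=4
port=9000
data_path=/mnt/minio/data
volumes="http://minio-{1...${nodes}}.lxd:${port}${data_path}"
echo "MINIO_VOLUMES=\"${volumes}\""
```

MinIO itself performs the `{1...4}` expansion at startup; the env file ships the brackets verbatim.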