veza/infra/ansible/roles/minio_distributed/tasks/main.yml
senke d86815561c
feat(infra): MinIO distributed EC:2 + migration script (W3 Day 12)
Four-node distributed MinIO cluster, single erasure set EC:2, tolerates
2 simultaneous node losses. 50% storage efficiency. Pinned to
RELEASE.2025-09-07T16-13-09Z to match docker-compose so dev/prod
parity is preserved.

- infra/ansible/roles/minio_distributed/: installs the pinned binary,
  drops a systemd unit pointing MINIO_VOLUMES at all four nodes via the
  bracket-expansion form, forces EC:2 via MINIO_STORAGE_CLASS_STANDARD.
  A Vault assertion blocks shipping placeholder credentials to
  staging/prod.
- bucket init: creates veza-prod-tracks, enables versioning, applies
  lifecycle.json (30d noncurrent expiry + 7d abort-multipart). Cold-tier
  transition is ready but inert until minio_remote_tier_name is set.
- infra/ansible/playbooks/minio_distributed.yml: provisions the 4
  containers, applies the common baseline + role.
- infra/ansible/inventory/lab.yml: new minio_nodes group.
- infra/ansible/tests/test_minio_resilience.sh: kills 2 nodes,
  verifies EC:2 reconstruction (read OK + checksum matches), restarts,
  waits for self-heal.
- scripts/minio-migrate-from-single.sh: mc mirror --preserve from
  the single-node bucket to the new cluster, count-verifies, prints
  rollout next steps.
- config/prometheus/alert_rules.yml: MinIODriveOffline (warn) +
  MinIONodesUnreachable (page); the page fires at >= 2 nodes
  unreachable because that is the redundancy ceiling for EC:2.
- docs/ENV_VARIABLES.md §12: MinIO migration cross-ref.
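
For reference, a sketch of the paging rule's shape (the PromQL
expression, job label, and annotation wording are assumptions; only the
>= 2 threshold and the warn/page split come from this change):

```yaml
- alert: MinIONodesUnreachable
  # >= 2 unreachable nodes is the EC:2 redundancy ceiling for a
  # 4-node, single-erasure-set cluster.
  expr: count(up{job="minio"} == 0) >= 2
  for: 1m
  labels:
    severity: page
  annotations:
    summary: ">= 2 MinIO nodes unreachable (EC:2 redundancy ceiling)"
```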

Acceptance (Day 12): EC:2 survives 2 concurrent kills + self-heals.
Lab apply pending. No backend code change; the interface stays AWS S3.

W3 progress: Redis Sentinel ✓ (Day 11), MinIO distributed ✓ (this),
CDN Day 13, DMCA Day 14, embed Day 15.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-28 13:46:42 +02:00


# minio_distributed role — installs MinIO server (versioned), drops
# the systemd unit pointing at all 4 nodes via MINIO_VOLUMES, starts
# the cluster. Idempotent.
#
# After every node converges, a one-shot init task on the FIRST node
# in `minio_nodes` creates the prod bucket + applies versioning +
# lifecycle. Running it on a single node is sufficient — MinIO
# replicates bucket metadata across the erasure set.
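#
# Quick arithmetic behind the headline numbers (restating the commit
# message, not new configuration): with 4 drives in one erasure set,
# EC:2 stripes each object as 2 data + 2 parity shards, so any 2
# drives/nodes can fail and reads still reconstruct, at the cost of
# 2/4 = 50% usable capacity.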
---
- name: Vault placeholders are overridden in non-lab envs
  ansible.builtin.assert:
    that:
      - minio_root_user != "CHANGE_ME_VAULT"
      - minio_root_password != "CHANGE_ME_VAULT_PASSWORD"
    fail_msg: |
      minio_root_user / minio_root_password still hold placeholder
      values. Provide them via group_vars/minio_ha.vault.yml (encrypted)
      before applying this role to staging or prod.
  when: (deploy_env | default("lab")) != "lab"
  tags: [minio, assert]

- name: Ensure minio user
  ansible.builtin.user:
    name: minio
    system: true
    home: "{{ minio_data_path }}"
    shell: /usr/sbin/nologin
    create_home: true
  tags: [minio, install]

- name: Ensure data + config directories
  ansible.builtin.file:
    path: "{{ item }}"
    state: directory
    owner: minio
    group: minio
    mode: "0750"
  loop:
    - "{{ minio_data_path }}"
    - "{{ minio_etc }}"
  tags: [minio, install]

- name: Check installed MinIO version
  ansible.builtin.stat:
    path: "/usr/local/bin/minio-{{ minio_version }}"
  register: minio_installed
  tags: [minio, install]

- name: Download MinIO server binary (versioned)
  ansible.builtin.get_url:
    url: "https://dl.min.io/server/minio/release/linux-{{ minio_arch }}/archive/minio.{{ minio_version }}"
    dest: "/usr/local/bin/minio-{{ minio_version }}"
    mode: "0755"
    owner: root
    group: root
  when: not minio_installed.stat.exists
  tags: [minio, install]

- name: Symlink /usr/local/bin/minio → versioned binary
  ansible.builtin.file:
    src: "/usr/local/bin/minio-{{ minio_version }}"
    dest: /usr/local/bin/minio
    state: link
    force: true
  notify: Restart minio
  tags: [minio, install]

- name: Check installed mc client version
  ansible.builtin.stat:
    path: "/usr/local/bin/mc-{{ minio_mc_version }}"
  register: mc_installed
  tags: [minio, install]

- name: Download mc client (versioned, used by bucket init task)
  ansible.builtin.get_url:
    url: "https://dl.min.io/client/mc/release/linux-{{ minio_arch }}/archive/mc.{{ minio_mc_version }}"
    dest: "/usr/local/bin/mc-{{ minio_mc_version }}"
    mode: "0755"
    owner: root
    group: root
  when: not mc_installed.stat.exists
  tags: [minio, install]

- name: Symlink /usr/local/bin/mc → versioned binary
  ansible.builtin.file:
    src: "/usr/local/bin/mc-{{ minio_mc_version }}"
    dest: /usr/local/bin/mc
    state: link
    force: true
  tags: [minio, install]

- name: Render /etc/default/minio
  ansible.builtin.template:
    src: minio.env.j2
    dest: /etc/default/minio
    owner: root
    group: minio
    mode: "0640"
  notify: Restart minio
  tags: [minio, config]
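
# For orientation, minio.env.j2 is expected to render roughly the
# following (hostnames are illustrative assumptions; the bracket-expansion
# MINIO_VOLUMES form and the EC:2 storage class are the documented intent):
#
#   MINIO_VOLUMES="http://minio-{1...4}.example.lab:{{ minio_port }}/data/minio"
#   MINIO_ROOT_USER={{ minio_root_user }}
#   MINIO_ROOT_PASSWORD={{ minio_root_password }}
#   MINIO_STORAGE_CLASS_STANDARD=EC:2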

- name: Render systemd unit
  ansible.builtin.template:
    src: minio.service.j2
    dest: /etc/systemd/system/minio.service
    owner: root
    group: root
    mode: "0644"
  notify: Restart minio
  tags: [minio, service]
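
# minio.service.j2 is not shown in this file; a minimal hedged sketch of
# the unit this role implies (exact directives may differ in the real
# template):
#
#   [Unit]
#   Description=MinIO distributed object storage
#   After=network-online.target
#
#   [Service]
#   User=minio
#   Group=minio
#   EnvironmentFile=/etc/default/minio
#   ExecStart=/usr/local/bin/minio server $MINIO_VOLUMES
#   Restart=always
#
#   [Install]
#   WantedBy=multi-user.target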

- name: Enable + start minio
  ansible.builtin.systemd:
    name: minio
    state: started
    enabled: true
    daemon_reload: true
  tags: [minio, service]
# -----------------------------------------------------------------------
# Bucket + lifecycle init — runs once, on the first node only. The
# erasure-coded cluster syncs metadata across nodes so we don't need
# to repeat this everywhere.
# -----------------------------------------------------------------------
- name: Wait for MinIO API to accept connections (every node)
  ansible.builtin.wait_for:
    host: "{{ ansible_default_ipv4.address | default('127.0.0.1') }}"
    port: "{{ minio_port }}"
    timeout: 60
  tags: [minio, init]

- name: Render lifecycle policy
  ansible.builtin.template:
    src: lifecycle.json.j2
    dest: "{{ minio_etc }}/lifecycle.json"
    owner: root
    group: minio
    mode: "0640"
  when: inventory_hostname == groups['minio_nodes'][0]
  tags: [minio, init]
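
# For reference, lifecycle.json.j2 renders roughly this S3 lifecycle
# document (rule IDs are assumptions; the 30d noncurrent expiry and 7d
# abort-multipart values are the stated policy):
#
#   {
#     "Rules": [
#       {"ID": "expire-noncurrent", "Status": "Enabled",
#        "NoncurrentVersionExpiration": {"NoncurrentDays": 30}},
#       {"ID": "abort-mpu", "Status": "Enabled",
#        "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7}}
#     ]
#   }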

- name: Configure mc alias for the local cluster
  ansible.builtin.command:
    cmd: >-
      /usr/local/bin/mc alias set veza-local
      http://localhost:{{ minio_port }}
      {{ minio_root_user }} {{ minio_root_password }}
  changed_when: false
  no_log: true
  when: inventory_hostname == groups['minio_nodes'][0]
  tags: [minio, init]

- name: Create the prod bucket if it doesn't exist
  ansible.builtin.command:
    cmd: /usr/local/bin/mc mb --ignore-existing veza-local/{{ minio_bucket_tracks }}
  register: mc_mb
  changed_when: "'Bucket created successfully' in mc_mb.stdout"
  when: inventory_hostname == groups['minio_nodes'][0]
  tags: [minio, init]

- name: Enable versioning on the prod bucket
  ansible.builtin.command:
    cmd: /usr/local/bin/mc version enable veza-local/{{ minio_bucket_tracks }}
  changed_when: false
  when: inventory_hostname == groups['minio_nodes'][0]
  tags: [minio, init]

# The `<` redirection below needs a shell; ansible.builtin.command does
# not interpret redirection, so this task uses the shell module.
- name: Apply lifecycle policy
  ansible.builtin.shell:
    cmd: >-
      /usr/local/bin/mc ilm import
      veza-local/{{ minio_bucket_tracks }}
      < {{ minio_etc }}/lifecycle.json
    executable: /bin/bash
  changed_when: false
  when: inventory_hostname == groups['minio_nodes'][0]
  tags: [minio, init]
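
# The "Restart minio" handler notified above lives in the role's
# handlers/main.yml (not part of this file); a minimal sketch of what it
# is assumed to do:
#
#   - name: Restart minio
#     ansible.builtin.systemd:
#       name: minio
#       state: restarted
#       daemon_reload: true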