# `group_vars/` layout

Three layers, in order of precedence (later wins):

1. `all/main.yml` — defaults shared across every inventory. Cross-cutting values like SSH hardening, monitoring agent version, and the Veza deploy contract (artifact URL, base image, ports, health probes).
2. `.yml` — environment overrides. Today: `staging.yml`, `prod.yml` (and `lab.yml` would live here too if `inventory/lab.yml` ever referenced an `all/lab` group). Targets that pin the Incus host, container prefix, public domain, log level, feature flags.
3. `all/vault.yml` — encrypted secrets (Ansible Vault). All entries prefixed `vault_*`. Plaintext template at `all/vault.yml.example`.

## Bootstrapping the vault

The vault file is **not** committed at first. To stand it up:

```bash
cd infra/ansible
cp group_vars/all/vault.yml.example group_vars/all/vault.yml
$EDITOR group_vars/all/vault.yml   # fill in placeholders
ansible-vault encrypt group_vars/all/vault.yml
echo "" > .vault-pass
chmod 0400 .vault-pass
```

`.vault-pass` is gitignored — never commit it. The Forgejo runner gets the same password from the `ANSIBLE_VAULT_PASSWORD` repo secret (see `.forgejo/workflows/deploy.yml`).

To edit later without decrypting on disk:

```bash
ansible-vault edit group_vars/all/vault.yml
```

To rotate the password (e.g., when an operator leaves):

```bash
ansible-vault rekey group_vars/all/vault.yml
echo "" > .vault-pass
# update Forgejo secret ANSIBLE_VAULT_PASSWORD to the new value
```

## How variables flow into containers

```
[Ansible runtime]                                          [Container]

group_vars/all/main.yml  ┐
group_vars/.yml          ├──→ roles/veza_app/templates/*.j2 ──→ /etc/veza/.env
group_vars/all/vault.yml ┘                                  ──→ /etc/veza/secrets/jwt-private.pem
                                                            ──→ systemd unit (EnvironmentFile=)
```

The systemd unit then reads `/etc/veza/.env` at start time. Reload semantics: a config change re-templates the env file and notifies the systemd handler, which restarts the unit.

## What lives in `host_vars/`?
`host_vars/.yml` for **per-host** overrides — typically when one container in an HA group needs a slightly different config (e.g., the postgres-primary needs `pg_auto_failover_role: node`, the monitor needs `pg_auto_failover_role: monitor`). The lab inventory inlines these as host-level vars; `host_vars/` exists for cases where they shouldn't bloat the inventory file.
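
As a sketch of the pg_auto_failover case above (the hostnames here are hypothetical — match them to the actual inventory hostnames), each file carries only the vars that differ from the group; everything else still resolves through `group_vars/` per the precedence order:

```yaml
# host_vars/pg-primary.yml  (hypothetical hostname)
# Only the delta from the group — this host is a data node.
pg_auto_failover_role: node
```

```yaml
# host_vars/pg-monitor.yml  (hypothetical hostname)
# Same var, different value — this host runs the monitor.
pg_auto_failover_role: monitor
```

Host vars sit above all three `group_vars/` layers in Ansible's precedence, so a value set here wins over `all/main.yml`, the environment file, and the vault.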