chore(ansible): inventory/staging.yml + prod.yml — fill in R720 phase-1 topology

Replace the TODO_HETZNER_IP / TODO_PROD_IP placeholders with the
container topology the W5+ deploy pipeline expects.

Both inventories now declare:
  incus_hosts          the R720 (10.0.20.150 — operator updates
                       to the actual address before first deploy)
  haproxy              one persistent container; reloaded on each
                       deploy, never destroyed
  veza_app_backend     {prefix}backend-{blue,green,tools}
  veza_app_stream      {prefix}stream-{blue,green}
  veza_app_web         {prefix}web-{blue,green}
  veza_data            {prefix}{postgres,redis,rabbitmq,minio}
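
Rendered as a graph, the staging inventory comes out roughly like this
(illustrative, abridged ansible-inventory output):

  $ ansible-inventory -i inventory/staging.yml --graph
  @all:
    |--@ungrouped:
    |--@incus_hosts:
    |  |--veza-staging
    |--@haproxy:
    |  |--veza-staging-haproxy
    |--@veza_app_backend:
    |  |--veza-staging-backend-blue
    |  |--veza-staging-backend-green
    |  |--veza-staging-backend-tools
    |--@veza_app_stream:
    |  |--veza-staging-stream-blue
    |  |--veza-staging-stream-green
    |--@veza_app_web:
    |  |--veza-staging-web-blue
    |  |--veza-staging-web-green
    |--@veza_data:
    |  |--veza-staging-postgres
    |  |--veza-staging-redis
    |  |--veza-staging-rabbitmq
    |  |--veza-staging-minio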

  Every group except incus_hosts sets
    ansible_connection: community.general.incus
  so playbooks reach into the containers via `incus exec`, with no
  SSH provisioned inside them.
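
As a concrete sketch (illustrative only, not part of this commit; the
play targets the veza_data group from the inventories below and uses
ansible.builtin.ping as the simplest end-to-end check):

  # Because veza_data sets the incus connection, these tasks execute
  # inside each container via `incus exec` on the R720; no sshd needed.
  - name: Verify data-tier containers answer over the incus connection
    hosts: veza_data
    gather_facts: false
    tasks:
      - name: Ping each container
        ansible.builtin.ping: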

Naming convention diverges per env to match what's already
established in the codebase:
  staging:  veza-staging-<component>[-<color>]
  prod:     veza-<component>[-<color>]            (bare, the prod default)
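
The prefix that produces these names lives in group_vars; the assumed
shape (the variable name is the one cited in the inventory comments
below):

  # group_vars/staging.yml (excerpt)
  veza_container_prefix: veza-staging-    # -> veza-staging-backend-blue, ...

  # group_vars/prod.yml (excerpt)
  veza_container_prefix: veza-            # -> veza-backend-blue, ...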

Both inventories share the same Incus host in v1.0 (single R720).
Prod migrates off-box at v1.1+; only ansible_host needs updating.
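
The v1.1+ move is then a one-line edit in prod.yml, along these lines
(203.0.113.10 is a placeholder documentation address, not a real
target):

  veza-prod:
    ansible_host: 203.0.113.10   # hypothetical off-box address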

Phase-1 simplification: staging on Hetzner Cloud (the original
TODO_HETZNER_IP target) is deferred — the operator can revive it later
as a third inventory `staging-hetzner.yml` if needed. Local-on-R720
staging is what was actually requested.

Containers absent at first run are fine — playbooks/deploy_data.yml
+ deploy_app.yml create them on demand. The inventory just makes
them addressable once they exist.
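
For illustration, the create-if-absent pattern can be as simple as the
following tasks run against incus_hosts (container name and image are
placeholders; the real logic belongs to the deploy playbooks and may
differ):

  - name: Check whether the container already exists on the Incus host
    ansible.builtin.command: incus info veza-staging-postgres
    register: incus_info
    changed_when: false
    failed_when: false

  - name: Launch the container only when it is absent
    ansible.builtin.command: incus launch images:debian/12 veza-staging-postgres
    when: incus_info.rc != 0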

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
senke · 2026-04-29 14:50:27 +02:00
commit 6de2923821 · parent 22d09dcbbb
2 changed files with 120 additions and 19 deletions

inventory/prod.yml

@@ -1,21 +1,60 @@
-# Prod inventory — single R720 (self-hosted Incus) at launch, with
-# Hetzner overflow planned post-launch. ROADMAP_V1.0_LAUNCH.md §2
-# documents the COMPRESSED HA stance: real multi-host HA arrives
-# v1.1+; v1.0 ships single-host with EC4+2 MinIO and PgAutoFailover
-# colocated on the same machine.
+# Prod inventory — single R720 (self-hosted Incus) at v1.0 launch,
+# Hetzner overflow post-launch. ROADMAP_V1.0_LAUNCH.md §2 documents
+# the COMPRESSED HA stance: real multi-host HA arrives v1.1+; v1.0
+# ships single-host with EC4+2 MinIO + PgAutoFailover colocated.
 #
-# Real ansible_host left as TODO until DNS (EX-5) is live. Use
-# ssh-config aliases or fill these in once `api.veza.fr` resolves.
+# Topology mirrors staging.yml (same shape, different prefix +
+# different network — see group_vars/prod.yml). Phase-2 (post v1.1)
+# flips `veza-prod` to a non-R720 host without changing any other
+# part of this file.
+#
+# Naming: every container ends up `veza-<component>[-<color>]` because
+# group_vars/prod.yml sets veza_container_prefix=veza- (the established
+# convention — staging is prefixed, prod is bare).
 all:
   hosts:
     veza-prod:
-      ansible_host: TODO_PROD_IP
+      ansible_host: 10.0.20.150
       ansible_user: ansible
       ansible_python_interpreter: /usr/bin/python3
   children:
     incus_hosts:
       hosts:
         veza-prod:
     haproxy:
       hosts:
-        veza-prod:
+        veza-haproxy:
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
+    veza_app_backend:
+      hosts:
+        veza-backend-blue:
+        veza-backend-green:
+        veza-backend-tools: # ephemeral, Phase A only
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
+    veza_app_stream:
+      hosts:
+        veza-stream-blue:
+        veza-stream-green:
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
+    veza_app_web:
+      hosts:
+        veza-web-blue:
+        veza-web-green:
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
+    veza_data:
+      hosts:
+        veza-postgres:
+        veza-redis:
+        veza-rabbitmq:
+        veza-minio:
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3

inventory/staging.yml

@@ -1,20 +1,82 @@
-# Staging inventory — Hetzner Cloud host that mirrors prod topology
-# (Postgres + Redis + RabbitMQ + MinIO + backend/web/stream
-# containers) at a smaller scale, for pre-deploy validation.
+# Staging inventory — local R720 (same Incus daemon as the Forgejo
+# runner; phase-1 simplification documented in group_vars/staging.yml).
 #
-# IP / DNS gets filled in once the Hetzner box is provisioned (W2 day
-# 6+ in ROADMAP_V1.0_LAUNCH.md). Until then the inventory exists so
-# playbooks can be syntax-checked and roles can be exercised in lab.
+# Connection model:
+#   * `veza-staging` is the Incus host (R720 itself). Ansible
+#     reaches it over SSH; the runner has the right SSH key in
+#     ~/.ssh/.
+#   * Every other host in this inventory lives INSIDE that Incus
+#     host as an LXC container. Ansible reaches them via the
+#     `community.general.incus` connection plugin (no SSH-into-
+#     containers needed) — see group vars under each child group.
+#
+# Container set:
+#   * App tier  — backend/stream/web in blue/green pairs (6
+#                 containers) + an ephemeral backend-tools used
+#                 by deploy_app.yml Phase A (migrations).
+#   * Edge      — haproxy (singleton, persistent across deploys).
+#   * Data tier — postgres, redis, rabbitmq, minio (singletons,
+#                 state survives every deploy).
+#
+# Used by:
+#   * .forgejo/workflows/deploy.yml (push:main → -i inventory/staging.yml)
+#   * .forgejo/workflows/rollback.yml + cleanup-failed.yml
+#   * Local debug: `ansible-playbook -i inventory/staging.yml \
+#       playbooks/deploy_data.yml --check --diff \
+#       --vault-password-file ~/.vault-pass`
+#
+# Naming: every container ends up `veza-staging-<component>[-<color>]`
+# because group_vars/staging.yml sets veza_container_prefix=veza-staging-.
 all:
   hosts:
     veza-staging:
-      ansible_host: TODO_HETZNER_IP
+      ansible_host: 10.0.20.150
       ansible_user: ansible
       ansible_python_interpreter: /usr/bin/python3
   children:
     incus_hosts:
       hosts:
         veza-staging:
     haproxy:
       hosts:
-        veza-staging:
+        veza-staging-haproxy:
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
+    # The 6 app containers + 1 ephemeral tools container. deploy_app.yml
+    # selects the inactive color dynamically from the haproxy
+    # container's /var/lib/veza/active-color file; both blue and
+    # green sit in inventory so either color is reachable when needed.
+    veza_app_backend:
+      hosts:
+        veza-staging-backend-blue:
+        veza-staging-backend-green:
+        veza-staging-backend-tools: # ephemeral, Phase A only
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
+    veza_app_stream:
+      hosts:
+        veza-staging-stream-blue:
+        veza-staging-stream-green:
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
+    veza_app_web:
+      hosts:
+        veza-staging-web-blue:
+        veza-staging-web-green:
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
+    # Data tier — never destroyed, only created if absent. ZFS
+    # snapshots taken on every deploy as the safety net.
+    veza_data:
+      hosts:
+        veza-staging-postgres:
+        veza-staging-redis:
+        veza-staging-rabbitmq:
+        veza-staging-minio:
+      vars:
+        ansible_connection: community.general.incus
+        ansible_python_interpreter: /usr/bin/python3
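
For illustration, the active-color selection described in the staging
comments above could be implemented along these lines (a sketch; the
actual deploy_app.yml tasks may differ, and only the state-file path
and host names come from this inventory):

  - name: Read the active color from the haproxy container
    ansible.builtin.slurp:
      src: /var/lib/veza/active-color
    delegate_to: veza-staging-haproxy
    register: active_color_raw

  - name: Deploy into the opposite (inactive) color
    ansible.builtin.set_fact:
      deploy_color: "{{ 'green' if (active_color_raw.content | b64decode | trim) == 'blue' else 'blue' }}"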