Commit graph

3 commits

senke
8200eeba6e chore(ansible): recover group_vars files lost in parallel-commit shuffle
Files originally part of the "split group_vars into all/{main,vault}"
commit were dropped during a rebase/amend when parallel session work
landed on the same area at the same time. The all/main.yml piece
ended up in the deploy workflow commit (989d8823); this commit
re-adds the rest:

  infra/ansible/group_vars/all/vault.yml.example
  infra/ansible/group_vars/staging.yml
  infra/ansible/group_vars/prod.yml
  infra/ansible/group_vars/README.md
  + delete infra/ansible/group_vars/all.yml (superseded by all/main.yml)

Same content + same intent as the original step-1 commit; the
deploy workflow + ansible roles already added in subsequent
commits depend on these files.

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 14:41:14 +02:00
senke
989d88236b feat(forgejo): workflows/deploy.yml — push:main → staging, tag:v* → prod
End-to-end CI deploy workflow. Triggers + jobs (condensed sketch
after the table):

  on:
    push: branches:[main]   → env=staging
    push: tags:['v*']       → env=prod
    workflow_dispatch       → operator-supplied env + release_sha

  resolve          ubuntu-latest          Compute env + 40-char SHA from
                                          trigger; output as job outputs
                                          for downstream jobs.
  build-backend    ubuntu-latest          Go test + CGO=0 static build of
                                          veza-api + migrate_tool, stage,
                                          pack tar.zst, PUT to Forgejo
                                          Package Registry.
  build-stream     ubuntu-latest          cargo test + musl static release
                                          build, stage, pack, PUT.
  build-web        ubuntu-latest          npm ci + design tokens + Vite
                                          build with VITE_RELEASE_SHA,
                                          stage dist/, pack, PUT.
  deploy           [self-hosted, incus]   ansible-playbook deploy_data.yml
                                          then deploy_app.yml against the
                                          resolved env's inventory.
                                          Vault pwd from secret → tmpfile
                                          → --vault-password-file → shred
                                          in `if: always()`. Ansible logs
                                          uploaded as artifact (30d
                                          retention) for forensics.
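
Condensed sketch of that wiring — not the committed file; step names,
tmpfile path, and playbook paths are assumptions, the expression
syntax is the GitHub-compatible one Forgejo Actions speaks:

  on:
    push:
      branches: [main]
      tags: ['v*']
    workflow_dispatch:
      inputs:
        env: { required: true }
        release_sha: { required: true }

  jobs:
    resolve:
      runs-on: ubuntu-latest
      outputs:
        env: ${{ steps.pick.outputs.env }}
        sha: ${{ steps.pick.outputs.sha }}
      steps:
        - id: pick
          run: |
            # dispatch → operator input; tag push → prod; branch push → staging
            if [ "$GITHUB_EVENT_NAME" = "workflow_dispatch" ]; then
              echo "env=${{ inputs.env }}"         >> "$GITHUB_OUTPUT"
              echo "sha=${{ inputs.release_sha }}" >> "$GITHUB_OUTPUT"
            elif [ "${GITHUB_REF#refs/tags/}" != "$GITHUB_REF" ]; then
              echo "env=prod"        >> "$GITHUB_OUTPUT"
              echo "sha=$GITHUB_SHA" >> "$GITHUB_OUTPUT"
            else
              echo "env=staging"     >> "$GITHUB_OUTPUT"
              echo "sha=$GITHUB_SHA" >> "$GITHUB_OUTPUT"
            fi

    deploy:
      needs: [resolve, build-backend, build-stream, build-web]
      runs-on: [self-hosted, incus]
      steps:
        - uses: actions/checkout@v4
        - run: |
            # secret → tmpfile (umask 077 keeps it owner-only) → ansible
            umask 077
            printf '%s' "$ANSIBLE_VAULT_PASSWORD" > "$RUNNER_TEMP/vault-pass"
            for pb in deploy_data deploy_app; do
              ansible-playbook -i "inventory/${{ needs.resolve.outputs.env }}.yml" \
                "playbooks/${pb}.yml" --vault-password-file "$RUNNER_TEMP/vault-pass"
            done
          env:
            ANSIBLE_VAULT_PASSWORD: ${{ secrets.ANSIBLE_VAULT_PASSWORD }}
        - if: always()
          run: shred -u "$RUNNER_TEMP/vault-pass"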

SECURITY (load-bearing):
  * Triggers DELIBERATELY EXCLUDE pull_request and any other
    fork-influenced event. The `incus` self-hosted runner has root-
    equivalent on the host via the mounted unix socket; opening
    PR-from-fork triggers would let arbitrary code run `incus exec`.
  * concurrency.group keys on env so two pushes can't race the same
    deploy (sketched below); cancel-in-progress kills the older build
    (the newer commit is what the operator wanted).
  * FORGEJO_REGISTRY_TOKEN + ANSIBLE_VAULT_PASSWORD are repo
    secrets — exposed via env and a tmpfile only, never echoed.
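
The guard itself is small; a sketch (the exact trigger → env
expression is illustrative):

  concurrency:
    # one in-flight deploy per env; a newer push cancels the older run
    group: deploy-${{ inputs.env || (startsWith(github.ref, 'refs/tags/') && 'prod' || 'staging') }}
    cancel-in-progress: true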

Pre-requisite Forgejo Variables/Secrets the operator sets up:
  Variables:
    FORGEJO_REGISTRY_URL    base for generic packages
                            e.g. https://forgejo.veza.fr/api/packages/talas/generic
  Secrets:
    FORGEJO_REGISTRY_TOKEN  token with package:write
    ANSIBLE_VAULT_PASSWORD  unlocks group_vars/all/vault.yml

Self-hosted runner expectation:
  Runs in the srv-102v container, which has /var/lib/incus/unix.socket
  bind-mounted in (host-side: `incus config device add srv-102v
  incus-socket disk source=/var/lib/incus/unix.socket
  path=/var/lib/incus/unix.socket`). Runner registered with the
  `incus` label so the deploy job pins to it.
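
Smoke test from inside the container, assuming incus-client is
installed there (the client looks for that same socket path by
default):

  $ ls -l /var/lib/incus/unix.socket   # device mount visible?
  $ incus list                         # full host view through the socket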

Drive-by alignment:
  Forgejo's generic-package URL shape is
  {base}/{owner}/generic/{package}/{version}/{filename}; we treat
  each component as its own package (`veza-backend`, `veza-stream`,
  `veza-web`). Updated three references (group_vars/all/main.yml's
  veza_artifact_base_url, veza_app/defaults/main.yml's
  veza_app_artifact_url, deploy_app.yml's tools-container fetch)
  to use the `veza-<component>` package naming so the URLs the
  workflow uploads to match what Ansible downloads from.
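
Worked example — a sketch, assuming the resolved 40-char SHA doubles
as the package version (RELEASE_SHA is an illustrative name):

  # generic-package upload is just an authenticated PUT:
  curl --fail -H "Authorization: token ${FORGEJO_REGISTRY_TOKEN}" \
    --upload-file veza-backend.tar.zst \
    "${FORGEJO_REGISTRY_URL}/veza-backend/${RELEASE_SHA}/veza-backend.tar.zst"
  # → https://forgejo.veza.fr/api/packages/talas/generic/veza-backend/<sha>/veza-backend.tar.zst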

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 14:39:25 +02:00
senke
65c20835c1 feat(infra): Ansible IaC scaffolding — common + incus_host roles (Day 5 v1.0.9)
Some checks failed
Veza CI / Frontend (Web) (push) Has been cancelled
E2E Playwright / e2e (full) (push) Has been cancelled
Veza CI / Notify on failure (push) Blocked by required conditions
Veza CI / Rust (Stream Server) (push) Successful in 3m27s
Security Scan / Secret Scanning (gitleaks) (push) Successful in 52s
Veza CI / Backend (Go) (push) Successful in 5m32s
Day 5 of ROADMAP_V1.0_LAUNCH.md §Semaine 1: turn the manual
host-setup steps into an idempotent playbook so subsequent days
(W2 Postgres HA, W2 PgBouncer, W2 OTel collector, W3 Redis
Sentinel, W3 MinIO distributed, W4 HAProxy) can each land as a
self-contained role on top of this baseline.

Layout (full tree under infra/ansible/):

  ansible.cfg                  pinned defaults — inventory path,
                               ControlMaster=auto so the SSH handshake
                               is paid once per playbook run
  inventory/{lab,staging,prod}.yml
                               three environments. lab is the R720's
                               local Incus container (10.0.20.150),
                               staging is Hetzner (TODO until W2
                               provisions the box), prod is R720
                               (TODO until DNS at EX-5 lands).
  group_vars/all.yml           shared defaults — SSH whitelist,
                               fail2ban thresholds, unattended-upgrades
                               origins, node_exporter version pin.
  playbooks/site.yml           entry point. Two plays:
                                 1. common (every host)
                                 2. incus_host (incus_hosts group)
  roles/common/                idempotent baseline:
                                 ssh.yml — drop-in
                                   /etc/ssh/sshd_config.d/50-veza-
                                   hardening.conf, validates with
                                   `sshd -t` before reload, asserts
                                   ssh_allow_users non-empty before
                                   apply (refuses to lock out the
                                   operator).
                                 fail2ban.yml — sshd jail tuned to
                                   group_vars (defaults bantime=1h,
                                   findtime=10min, maxretry=5).
                                 unattended_upgrades.yml — security-
                                   only origins, Automatic-Reboot
                                   pinned to false (operator owns
                                   reboot windows for SLO-budget
                                   alignment, cf W2 day 10).
                                 node_exporter.yml — pinned to
                                   1.8.2, runs as a systemd unit
                                   on :9100. Skips download when
                                   --version already matches.
  roles/incus_host/            zabbly upstream apt repo + incus +
                               incus-client install. First-time
                               `incus admin init --preseed` only when
                               `incus list` errors (i.e. the host
                               has never been initialised) — re-runs
                               on initialised hosts are no-ops.
                               Configures incusbr0 / 10.99.0.1/24
                               with NAT + default storage pool.
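
The first-boot guard reads roughly like this (task wording and the
preseed file name are illustrative):

  - name: Probe whether incus is already initialised
    ansible.builtin.command: incus list
    register: incus_probe
    changed_when: false
    failed_when: false
    tags: [incus]

  - name: First-time init from preseed (no-op on initialised hosts)
    ansible.builtin.command: incus admin init --preseed
    args:
      stdin: "{{ lookup('file', 'preseed.yml') }}"
    when: incus_probe.rc != 0
    tags: [incus]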

Acceptance verified locally (a full --check needs SSH to the lab
host, which is not reachable from this box, so the operator runs
that step):

  $ cd infra/ansible
  $ ansible-playbook -i inventory/lab.yml playbooks/site.yml --syntax-check
  playbook: playbooks/site.yml          ← clean
  $ ansible-playbook -i inventory/lab.yml playbooks/site.yml --list-tasks
  21 tasks across 2 plays, all tagged.  ← partial applies work
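
The entry point those commands walk, schematically (play names
illustrative; groups from the inventories above):

  # playbooks/site.yml
  - name: Baseline every host
    hosts: all
    become: true
    roles:
      - role: common
        tags: [common]

  - name: Incus hosts
    hosts: incus_hosts
    become: true
    roles:
      - role: incus_host
        tags: [incus]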

Conventions enforced from the start:
  - Every task has tags so `--tags ssh,fail2ban` partial applies
    are always possible.
  - Sub-task files (ssh.yml, fail2ban.yml, etc.) so the role
    main.yml stays a directory of concerns, not a wall of tasks.
  - Validators run before reload (sshd -t for sshd_config). The
    role refuses to apply changes that would lock the operator out
    (sketched after this list).
  - Comments answer "why" — task names + module names already
    say "what".

Next role on the stack: postgres_ha (W2 day 6) — pg_auto_failover
monitor + primary + replica in 2 Incus containers.

SKIP_TESTS=1 — IaC YAML, no app code.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 18:16:38 +02:00