End-to-end CI deploy workflow. Triggers + jobs:

Triggers:
  push, branches: [main]   → env=staging
  push, tags: ['v*']       → env=prod
  workflow_dispatch        → operator-supplied env + release_sha

Jobs:
  resolve        ubuntu-latest         Compute env + 40-char SHA from the
                                       trigger; expose them as job outputs
                                       for downstream jobs.
  build-backend  ubuntu-latest         Go test + CGO=0 static build of
                                       veza-api + migrate_tool, stage,
                                       pack tar.zst, PUT to Forgejo
                                       Package Registry.
  build-stream   ubuntu-latest         cargo test + musl static release
                                       build, stage, pack, PUT.
  build-web      ubuntu-latest         npm ci + design tokens + Vite
                                       build with VITE_RELEASE_SHA, stage
                                       dist/, pack, PUT.
  deploy         [self-hosted, incus]  ansible-playbook deploy_data.yml
                                       then deploy_app.yml against the
                                       resolved env's inventory. Vault
                                       password from secret → tmpfile →
                                       --vault-password-file → shred in
                                       `if: always()`. Ansible logs
                                       uploaded as artifact (30d
                                       retention) for forensics.
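The deploy job's vault-password lifecycle can be sketched as follows. This is a minimal illustration, not the workflow's actual step code; the env-var fallback and the inline playbook invocation are assumptions.

```shell
# Sketch: secret -> tmpfile -> --vault-password-file -> shred.
set -eu

VAULT_PASS_FILE="$(mktemp)"
chmod 600 "${VAULT_PASS_FILE}"
# In the real job the secret is injected into the step's env;
# the fallback value here only keeps the sketch self-contained.
printf '%s' "${ANSIBLE_VAULT_PASSWORD:-dummy-for-illustration}" > "${VAULT_PASS_FILE}"

# The real job would now run, e.g.:
#   ansible-playbook -i <resolved-env-inventory> deploy_data.yml \
#     --vault-password-file "${VAULT_PASS_FILE}"

# Mirror of the `if: always()` cleanup step: overwrite then unlink,
# falling back to rm where shred is unavailable.
shred -u "${VAULT_PASS_FILE}" 2>/dev/null || rm -f "${VAULT_PASS_FILE}"
```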
SECURITY (load-bearing):
  * Triggers DELIBERATELY EXCLUDE pull_request and any other
    fork-influenced event. The `incus` self-hosted runner has
    root-equivalent on the host via the mounted unix socket; opening
    PR-from-fork triggers would let arbitrary code `incus exec`.
  * concurrency.group keys on env so two pushes can't race the same
    deploy; cancel-in-progress kills the older build (the newer commit
    is what the operator wanted).
  * FORGEJO_REGISTRY_TOKEN + ANSIBLE_VAULT_PASSWORD are repo
    secrets — exposed to env and a tmpfile only, never echoed to logs.
Pre-requisite Forgejo Variables/Secrets the operator sets up:
  Variables:
    FORGEJO_REGISTRY_URL     base URL for generic packages,
                             e.g. https://forgejo.veza.fr/api/packages/talas/generic
  Secrets:
    FORGEJO_REGISTRY_TOKEN   token with package:write scope
    ANSIBLE_VAULT_PASSWORD   unlocks group_vars/all/vault.yml
Self-hosted runner expectation:
  Runs in the srv-102v container, with the host's
  /var/lib/incus/unix.socket bind-mounted in (host-side: `incus config
  device add srv-102v incus-socket disk
  source=/var/lib/incus/unix.socket
  path=/var/lib/incus/unix.socket`). The runner is registered with the
  `incus` label so the deploy job pins to it.
Drive-by alignment:
  Forgejo's generic-package URL shape is
  {base}/{package}/{version}/{filename}, where {base} already includes
  the owner and the `generic` segment; we treat each component as its
  own package (`veza-backend`, `veza-stream`, `veza-web`). Updated
  three references (group_vars/all/main.yml's veza_artifact_base_url,
  veza_app/defaults/main.yml's veza_app_artifact_url, deploy_app.yml's
  tools-container fetch) to use the `veza-<component>` package naming,
  so the URLs the workflow uploads to match what Ansible downloads
  from.
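The upload/download symmetry can be sketched as follows. The base URL and package naming follow the text above; the SHA and tarball filename are illustrative placeholders, and the commented curl invocation assumes a token-auth header.

```shell
# Sketch: both the workflow (upload) and Ansible (download) derive the
# same URL from three parts, so they cannot drift apart.
set -eu

base="https://forgejo.veza.fr/api/packages/talas/generic"
component="veza-backend"
sha="0123456789abcdef0123456789abcdef01234567"   # 40-char release SHA
tarball="${component}-${sha}.tar.zst"            # illustrative naming

url="${base}/${component}/${sha}/${tarball}"

# Workflow side (upload), assuming FORGEJO_REGISTRY_TOKEN in env:
#   curl --fail -H "Authorization: token ${FORGEJO_REGISTRY_TOKEN}" \
#     --upload-file "dist/${tarball}" "${url}"
# Ansible side fetches the exact same ${url}.
echo "${url}"
```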
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
90 lines
3.5 KiB
YAML
# Shared defaults across every inventory (lab/staging/prod). Override
# per-environment in `group_vars/<group>.yml` or per-host in
# `host_vars/<host>.yml`.
---
# Owner contact (used in some unattended-upgrades + monitoring agent configs).
veza_ops_email: ops@veza.fr

# v1.0.9 Day 5: SSH hardening surface that the `common` role enforces.
# Override these in production via group_vars/veza_prod.yml when the
# bastion's specific port / allowed users are decided. Defaults are
# safe for lab.
ssh_port: 22
ssh_permit_root_login: "no"
ssh_password_authentication: "no"
ssh_allow_users:
  - senke
  - ansible

# fail2ban — per-jail thresholds. The defaults are conservative for
# a self-hosted single-machine deployment; production may want
# lower findtime / higher bantime once Forgejo + Veza traffic is
# baselined.
fail2ban_bantime: 3600    # 1h
fail2ban_findtime: 600    # 10min
fail2ban_maxretry: 5

# unattended-upgrades — security updates only by default. The role
# never enables auto-reboot; ROADMAP_V1.0_LAUNCH.md §5 game day pins
# downtime windows to controlled cycles, not OS-driven reboots.
unattended_upgrades_origins:
  - "${distro_id}:${distro_codename}-security"
  - "${distro_id}ESMApps:${distro_codename}-apps-security"
  - "${distro_id}ESM:${distro_codename}-infra-security"
unattended_upgrades_auto_reboot: false

# Monitoring agent: prometheus node_exporter is the bare-minimum
# host metrics surface (CPU / memory / disk / network). The
# observability stack (Tempo + Loki + Grafana) lands W2 in roadmap.
monitoring_node_exporter_version: "1.8.2"
monitoring_node_exporter_port: 9100

# ============================================================
# Veza app deploy — defaults shared by every environment.
# Each can be overridden in group_vars/{staging,prod}.yml.
# ============================================================

# Forgejo Package Registry where the deploy workflow pushes release
# tarballs. Forgejo's generic-package URL shape is:
#   {base}/{package}/{version}/{filename}
# where {base} already includes the owner and the `generic` segment.
# We treat each component as a separate package (`veza-backend`,
# `veza-stream`, `veza-web`), the SHA as the version, and the
# tarball name as the filename. Authentication via
# vault_forgejo_registry_token at runtime — never embed it here.
veza_artifact_base_url: "https://forgejo.veza.fr/api/packages/talas/generic"

# Container image used as the base for fresh app containers. The
# `veza_app` role apt-installs OS deps on top. Pinned tag keeps deploys
# reproducible across base-image updates.
veza_app_base_image: "images:debian/13"

# Per-component HTTP ports. Backend listens on `APP_PORT` env var;
# stream listens on `PORT` env var. Templates render these into env
# files; HAProxy reads them to wire backends.
veza_backend_port: 8080
veza_stream_port: 8082
veza_web_port: 80

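The "render into env files" step described above can be sketched in plain shell. The env-file paths and the shell rendering are illustrative stand-ins for the role's actual Jinja2 templates.

```shell
# Sketch: each component gets an env file exposing its port under the
# variable name that component reads (APP_PORT vs PORT).
set -eu

veza_backend_port=8080
veza_stream_port=8082

env_dir="$(mktemp -d)"   # stands in for a path under /etc/veza
printf 'APP_PORT=%s\n' "${veza_backend_port}" > "${env_dir}/veza-backend.env"
printf 'PORT=%s\n' "${veza_stream_port}" > "${env_dir}/veza-stream.env"

cat "${env_dir}/veza-backend.env"   # APP_PORT=8080
```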
# Health probe parameters — used by deploy_app's Phase D and by the
# rollback playbook when verifying a switched color.
veza_healthcheck_retries: 30
veza_healthcheck_delay_seconds: 2
veza_healthcheck_paths:
  backend: /api/v1/health
  stream: /health
  web: /

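The probe loop these parameters drive can be sketched as below. `probe` is a stub for the real check (e.g. `curl --fail` against the component's port and healthcheck path); the signal file only makes the sketch self-contained.

```shell
# Sketch: retry up to veza_healthcheck_retries times,
# veza_healthcheck_delay_seconds apart, until the probe passes.
set -eu

retries=30
delay=2

signal="$(mktemp)"              # mktemp creates it, so probe passes
probe() { [ -f "${signal}" ]; } # stub for the real HTTP check

healthy=false
i=0
while [ "$i" -lt "$retries" ]; do
  i=$((i + 1))
  if probe; then healthy=true; break; fi
  sleep "${delay}"
done
rm -f "${signal}"
echo "healthy=${healthy} after ${i} attempt(s)"   # healthy=true after 1 attempt(s)
```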
# OS package set installed in every fresh app container. Component-
# specific extras live in roles/veza_app/vars/<component>.yml.
veza_common_os_packages:
  - ca-certificates
  - curl
  - tzdata
  - zstd   # to decompress release tarballs

# Where artefacts land in-container. Per-SHA subdirs let multiple
# releases coexist for forensics without conflict.
veza_install_root: /opt/veza
veza_config_root: /etc/veza
veza_log_root: /var/log/veza
veza_state_root: /var/lib/veza
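The per-SHA layout under veza_install_root can be sketched as below. The tmpdir stands in for /opt/veza so the sketch runs unprivileged, and the `current` symlink is an illustrative convention for release switching, not necessarily how the role does it.

```shell
# Sketch: one subdir per release SHA, so old releases stay on disk
# for forensics while a symlink selects the live one.
set -eu

veza_install_root="$(mktemp -d)"   # stands in for /opt/veza
component="veza-backend"
sha="0123456789abcdef0123456789abcdef01234567"

release_dir="${veza_install_root}/${component}/${sha}"
mkdir -p "${release_dir}"
# A real deploy would unpack the fetched artefact here, e.g.:
#   tar --zstd -xf "${component}-${sha}.tar.zst" -C "${release_dir}"
ln -s "${release_dir}" "${veza_install_root}/${component}/current"

readlink "${veza_install_root}/${component}/current"
```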