End-to-end CI deploy workflow. Triggers + jobs:

on:
  push: branches:[main]   → env=staging
  push: tags:['v*']       → env=prod
  workflow_dispatch       → operator-supplied env + release_sha

resolve        ubuntu-latest         Compute env + 40-char SHA from the
                                     trigger; exposed as job outputs for
                                     downstream jobs.
build-backend  ubuntu-latest         Go test + CGO_ENABLED=0 static build
                                     of veza-api + migrate_tool; stage,
                                     pack tar.zst, PUT to Forgejo Package
                                     Registry.
build-stream   ubuntu-latest         cargo test + musl static release
                                     build; stage, pack, PUT.
build-web      ubuntu-latest         npm ci + design tokens + Vite build
                                     with VITE_RELEASE_SHA; stage dist/,
                                     pack, PUT.
deploy         [self-hosted, incus]  ansible-playbook deploy_data.yml then
                                     deploy_app.yml against the resolved
                                     env's inventory. Vault password from
                                     secret → tmpfile →
                                     --vault-password-file → shred in
                                     `if: always()`. Ansible logs uploaded
                                     as an artifact (30-day retention) for
                                     forensics.
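A minimal sketch of the trigger block and the resolve job's output plumbing, assuming Forgejo Actions' GitHub-compatible syntax (input names and the step body are illustrative, not the actual workflow):

```yaml
on:
  push:
    branches: [main]   # -> env=staging
    tags: ['v*']       # -> env=prod
  workflow_dispatch:
    inputs:
      env:
        description: Target environment
        required: true
      release_sha:
        description: Full 40-char SHA to deploy
        required: true

jobs:
  resolve:
    runs-on: ubuntu-latest
    outputs:
      env: ${{ steps.pick.outputs.env }}
      sha: ${{ steps.pick.outputs.sha }}
    steps:
      - id: pick
        run: |
          # dispatch -> operator input, tag push -> prod, branch push -> staging
          if [ -n "${{ github.event.inputs.env }}" ]; then
            echo "env=${{ github.event.inputs.env }}" >> "$GITHUB_OUTPUT"
            echo "sha=${{ github.event.inputs.release_sha }}" >> "$GITHUB_OUTPUT"
          elif [[ "${{ github.ref }}" == refs/tags/* ]]; then
            echo "env=prod" >> "$GITHUB_OUTPUT"
            echo "sha=${{ github.sha }}" >> "$GITHUB_OUTPUT"
          else
            echo "env=staging" >> "$GITHUB_OUTPUT"
            echo "sha=${{ github.sha }}" >> "$GITHUB_OUTPUT"
          fi
```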
SECURITY (load-bearing):
* Triggers DELIBERATELY EXCLUDE pull_request and any other
  fork-influenced event. The `incus` self-hosted runner has
  root-equivalent access on the host via the mounted unix socket;
  opening PR-from-fork triggers would let arbitrary code `incus exec`.
* concurrency.group keys on env so two pushes can't race the same
  deploy; cancel-in-progress kills the older build (the newer commit
  is what the operator wanted).
* FORGEJO_REGISTRY_TOKEN + ANSIBLE_VAULT_PASSWORD are repo secrets;
  they reach the job only via env and a tmpfile, never echoed.
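One way the concurrency and vault-password patterns above could be wired (a sketch; the `deploy-` group name, step names, and playbook paths are assumptions):

```yaml
jobs:
  deploy:
    runs-on: [self-hosted, incus]
    needs: [resolve, build-backend, build-stream, build-web]
    # Job-level concurrency keyed on env: two pushes can't race the
    # same deploy, and the older in-flight run is cancelled.
    concurrency:
      group: deploy-${{ needs.resolve.outputs.env }}
      cancel-in-progress: true
    steps:
      - name: Materialize vault password
        env:
          ANSIBLE_VAULT_PASSWORD: ${{ secrets.ANSIBLE_VAULT_PASSWORD }}
        run: |
          VAULT_PW_FILE="$(mktemp)"
          printf '%s' "$ANSIBLE_VAULT_PASSWORD" > "$VAULT_PW_FILE"
          echo "VAULT_PW_FILE=$VAULT_PW_FILE" >> "$GITHUB_ENV"
      - name: Run playbooks
        run: >
          ansible-playbook playbooks/deploy_app.yml
          --vault-password-file "$VAULT_PW_FILE"
      - name: Shred vault password
        if: always()
        run: shred -u "$VAULT_PW_FILE"
```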
Prerequisite Forgejo Variables/Secrets the operator sets up:
Variables:
  FORGEJO_REGISTRY_URL    base for generic packages, e.g.
                          https://forgejo.veza.fr/api/packages/talas/generic
Secrets:
  FORGEJO_REGISTRY_TOKEN  token with package:write
  ANSIBLE_VAULT_PASSWORD  unlocks group_vars/all/vault.yml
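For reference, a build job's upload step might look like this (a sketch; the tarball name is an assumption, and `curl --upload-file` implies an HTTP PUT):

```yaml
- name: Upload tarball to the Forgejo Package Registry
  env:
    TOKEN: ${{ secrets.FORGEJO_REGISTRY_TOKEN }}
    BASE: ${{ vars.FORGEJO_REGISTRY_URL }}
    SHA: ${{ needs.resolve.outputs.sha }}
  run: |
    curl --fail \
      -H "Authorization: token $TOKEN" \
      --upload-file "veza-backend-${SHA}.tar.zst" \
      "$BASE/veza-backend/${SHA}/veza-backend-${SHA}.tar.zst"
```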
Self-hosted runner expectation:
  Runs in the srv-102v container, with the host's
  /var/lib/incus/unix.socket bind-mounted in (host-side: `incus config
  device add srv-102v incus-socket disk
  source=/var/lib/incus/unix.socket
  path=/var/lib/incus/unix.socket`). The runner is registered with the
  `incus` label so the deploy job pins to it.
Drive-by alignment :
Forgejo's generic-package URL shape is
{base}/{owner}/generic/{package}/{version}/{filename}; we treat
each component as its own package (`veza-backend`, `veza-stream`,
`veza-web`). Updated three references (group_vars/all/main.yml's
veza_artifact_base_url, veza_app/defaults/main.yml's
veza_app_artifact_url, deploy_app.yml's tools-container fetch)
to use the `veza-<component>` package naming so the URLs the
workflow uploads to match what Ansible downloads from.
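On the Ansible side, the aligned naming means the URL variables resolve along these lines (a sketch; the tarball filename is an assumption):

```yaml
# group_vars/all/main.yml
veza_artifact_base_url: "https://forgejo.veza.fr/api/packages/talas/generic"

# roles/veza_app/defaults/main.yml
veza_app_artifact_url: >-
  {{ veza_artifact_base_url }}/veza-{{ veza_component }}/{{
  veza_release_sha }}/veza-{{ veza_component }}-{{ veza_release_sha }}.tar.zst
```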
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
# veza_app role

Generic, parameterized role that deploys ONE Veza application
component (backend, stream, or web) into a freshly-recreated Incus
container, then probes its health endpoint. Driven from
`playbooks/deploy_app.yml` once per component, per blue/green color,
in a deploy run.
## Why one role for three components?

The 80% of the work is the same for each:

- Recreate the Incus container from a profile (`incus delete --force`,
  then `incus launch`).
- Apt-install OS deps.
- Pull the release tarball from the Forgejo Package Registry, extract.
- Render the env file from Vault-backed variables.
- Install a systemd unit (or, for web, an nginx site config).
- Start the service and probe its health endpoint.

The 20% deltas (binary name, port, OS deps, env-file shape, kind:
binary vs static) live in `vars/<component>.yml`.
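The per-component vars file might look like this for the backend (a hypothetical sketch; the key names are illustrative, the values come from the component matrix):

```yaml
# vars/backend.yml (key names illustrative)
veza_app_kind: binary
veza_app_binary: veza-api
veza_app_port: 8080
veza_app_health_path: /api/v1/health
veza_app_extra_packages:
  - postgresql-client
```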
## Inputs

The caller (playbook) is expected to set:

| variable | required | meaning |
|---|---|---|
| `veza_component` | yes | One of `backend`, `stream`, `web`. Drives the `vars/<component>.yml` lookup. |
| `veza_target_color` | yes | `blue` or `green`. The role recreates `<prefix><component>-<color>`. |
| `veza_release_sha` | yes | Full git SHA of the release. Names the tarball + the install dir. |
| `veza_container_prefix` | inherited | From `group_vars/<env>.yml`, e.g. `veza-staging-` or `veza-`. |
| `veza_incus_host` | inherited | Inventory host that runs `incus exec`. |

Other parameters fall through `defaults/main.yml` (overridable per env
in `group_vars/<env>.yml`).
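A call site in `playbooks/deploy_app.yml` might therefore look like this (a sketch; the loop shape and variable sources are assumptions):

```yaml
- name: Deploy one component into the target color
  ansible.builtin.include_role:
    name: veza_app
  vars:
    veza_component: "{{ item }}"
    veza_target_color: "{{ target_color }}"
    veza_release_sha: "{{ release_sha }}"
  loop: [backend, stream, web]
```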
## What the role does NOT do

- Switch HAProxy. That's the `veza_haproxy_switch` role, run after
  health probes pass for ALL three components.
- Run database migrations. Those run once per deploy in a separate
  ephemeral `<prefix>backend-tools` container, before any color is
  recreated. See `playbooks/deploy_app.yml` Phase A.
- Touch data containers (postgres, redis, rabbitmq, minio). Those go
  through `playbooks/deploy_data.yml`, with their own roles.
## Component matrix

| | backend (binary) | stream (binary) | web (static) |
|---|---|---|---|
| binary | `veza-api` | `stream_server` | n/a (nginx serves `dist/`) |
| port | 8080 | 8082 | 80 |
| health path | `/api/v1/health` | `/health` | `/` |
| extra deps | postgresql-client | (libssl3 in common set) | nginx |
| service unit | yes (systemd) | yes (systemd) | no (nginx as systemd dep) |
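The health probe at the end of the role could be a simple `uri` retry loop (a sketch; `veza_container_ip` and the other variable names are illustrative):

```yaml
- name: Probe the component's health endpoint
  ansible.builtin.uri:
    url: "http://{{ veza_container_ip }}:{{ veza_app_port }}{{ veza_app_health_path }}"
    status_code: 200
  register: health
  retries: 10
  delay: 3
  until: health.status == 200
```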