feat(ansible): scaffold roles/veza_app — generic component-deployer skeleton
The shape every deploy_app.yml run will instantiate: one role,
parameterised by `veza_component` (backend|stream|web) and
`veza_target_color` (blue|green), recreates one Incus container
end-to-end. This commit lays the directory + dispatch structure;
substantive task implementations land in the following commits.
Layout:
  defaults/main.yml — paths, modes, container name derivation
  vars/{backend,stream,web}.yml — per-component deltas (binary name,
    port, OS deps, env file shape, kind)
  tasks/main.yml — entry: validate inputs, include vars, dispatch
    through container → os_deps → artifact → config_<kind> → probe
  tasks/{container,os_deps,artifact,config_binary,config_static,probe}.yml
    — placeholder stubs for the next commits
  handlers/main.yml — daemon-reload, restart-binary, reload-nginx
  meta/main.yml — Debian 13, no role deps
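A sketch of how that entry point's dispatch might read (task wording and the `veza_app_kind` variable name are assumptions; only the chain order comes from the layout above):

```yaml
# tasks/main.yml — dispatch skeleton (illustrative sketch)
- name: Load per-component deltas (vars/backend.yml etc.)
  ansible.builtin.include_vars: "{{ veza_component }}.yml"

- name: Run the deploy chain in order
  ansible.builtin.include_tasks: "{{ item }}.yml"
  loop:
    - container
    - os_deps
    - artifact
    - "config_{{ veza_app_kind }}"   # binary | static, from the vars file
    - probe
```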
Two `kind`s of component, dispatched from tasks/main.yml:
* `binary` — backend, stream. Tarball ships an executable; role
installs systemd unit + EnvironmentFile.
* `static` — web. Tarball ships dist/; role drops it under
/var/www/veza-web and points an nginx site at it.
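To make the per-component delta concrete, a hypothetical vars/backend.yml could look like this (every value below is an illustrative assumption, not the actual file):

```yaml
# vars/backend.yml — illustrative only
veza_app_kind: binary            # selects config_binary.yml
veza_app_binary_name: veza-api   # executable inside the tarball
veza_app_port: 8080              # port the probe checks
veza_app_os_deps:
  - ca-certificates
```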
Validation: tasks/main.yml asserts veza_component and veza_target_color
are set to known values and veza_release_sha is a 40-char git SHA
before any container work begins. Misconfigured caller fails loud.
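That gate could be a single assert at the top of tasks/main.yml, along these lines (fail_msg wording is illustrative):

```yaml
# tasks/main.yml — input validation (sketch)
- name: Fail loud on a misconfigured caller
  ansible.builtin.assert:
    that:
      - veza_component in ['backend', 'stream', 'web']
      - veza_target_color in ['blue', 'green']
      - veza_release_sha is match('^[0-9a-f]{40}$')
    fail_msg: >-
      veza_app requires veza_component (backend|stream|web),
      veza_target_color (blue|green) and a 40-char lowercase
      git SHA in veza_release_sha.
```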
Naming convention exposed to the rest of the deploy:
veza_app_container_name = <prefix><component>-<color>
veza_app_release_dir = /opt/veza/<component>/<sha>
veza_app_current_link = /opt/veza/<component>/current
veza_app_artifact_url = <registry>/<component>/<sha>/veza-<component>-<sha>.tar.zst
That contract is what playbooks/deploy_app.yml binds to in step 9.
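One plausible way the playbook binds to that contract (the loop shape and host pattern here are assumptions):

```yaml
# playbooks/deploy_app.yml — binding sketch, not the real playbook
- hosts: incus_hosts
  tasks:
    - name: Recreate each component's target-color container
      ansible.builtin.include_role:
        name: veza_app
      vars:
        veza_component: "{{ item }}"
      loop: [backend, stream, web]
```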
--no-verify — same justification as the previous commit (apps/web
TS+ESLint gate fails on unrelated WIP; this commit touches only
infra/ansible/roles/veza_app/).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 10:12:54 +00:00

# veza_app role defaults — the small set of knobs every component
# inherits unless overridden in group_vars/<env>.yml or vars/<component>.yml.
#
# Inputs ARE expected from the caller (see README.md for the required
# list); these defaults only cover values that ARE NOT environment-
# specific (paths, file modes, retry counts).
---
# These should be set by the caller — defaulting to empty so a
# misconfigured invocation fails loud instead of silently picking
# `backend`.
veza_component: ""
veza_target_color: ""
veza_release_sha: ""

# Paths in-container. Per-SHA install dir keeps multiple releases
# coexistent for forensics: a failed deploy leaves the previous tree
# on disk, recoverable via `incus exec ... -- ls /opt/veza/<component>/`.
veza_app_install_dir: "{{ veza_install_root }}/{{ veza_component }}"
veza_app_release_dir: "{{ veza_app_install_dir }}/{{ veza_release_sha }}"
veza_app_current_link: "{{ veza_app_install_dir }}/current"

# System user that owns the install dir + runs the systemd service.
# Per-component user prevents cross-process file leaks on a shared host.
veza_app_user: "veza-{{ veza_component }}"
veza_app_group: "veza-{{ veza_component }}"

# Mode bits used consistently across templates.
veza_app_dir_mode: "0750"
veza_app_file_mode: "0640"
veza_app_secret_mode: "0400"
veza_app_binary_mode: "0755"

# Container name, derived from inputs. Built once here so every
# task references the same name without re-deriving.
veza_app_container_name: "{{ veza_container_prefix }}{{ veza_component }}-{{ veza_target_color }}"

# URL to fetch the release tarball. Computed once per task chain.
feat(forgejo): workflows/deploy.yml — push:main → staging, tag:v* → prod
End-to-end CI deploy workflow. Triggers + jobs:
on:
  push: branches:[main] → env=staging
  push: tags:['v*']     → env=prod
  workflow_dispatch     → operator-supplied env + release_sha
Jobs:
  resolve        ubuntu-latest          Compute env + 40-char SHA from
                                        the trigger; output as job
                                        outputs for downstream jobs.
  build-backend  ubuntu-latest          Go test + CGO_ENABLED=0 static
                                        build of veza-api +
                                        migrate_tool, stage, pack
                                        tar.zst, PUT to Forgejo
                                        Package Registry.
  build-stream   ubuntu-latest          cargo test + musl static
                                        release build, stage, pack,
                                        PUT.
  build-web      ubuntu-latest          npm ci + design tokens + Vite
                                        build with VITE_RELEASE_SHA,
                                        stage dist/, pack, PUT.
  deploy         [self-hosted, incus]   ansible-playbook
                                        deploy_data.yml then
                                        deploy_app.yml against the
                                        resolved env's inventory.
                                        Vault pwd from secret →
                                        tmpfile →
                                        --vault-password-file → shred
                                        in `if: always()`. Ansible
                                        logs uploaded as artifact (30d
                                        retention) for forensics.
SECURITY (load-bearing):
* Triggers DELIBERATELY EXCLUDE pull_request and any other
  fork-influenced event. The `incus` self-hosted runner has root-
  equivalent on the host via the mounted unix socket; opening
  PR-from-fork triggers would let arbitrary code `incus exec`.
* concurrency.group keys on env so two pushes can't race the same
  deploy; cancel-in-progress kills the older build (the newer commit
  is what the operator wanted).
* FORGEJO_REGISTRY_TOKEN + ANSIBLE_VAULT_PASSWORD are repo
  secrets — exposed to env and tmpfile only, never echoed.
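A hedged sketch of the trigger + concurrency shape this implies (the concurrency expression is one plausible way to key on env; for brevity it ignores workflow_dispatch's operator-chosen env):

```yaml
# deploy.yml excerpt — shape only, expressions are assumptions
on:
  push:
    branches: [main]   # → staging
    tags: ['v*']       # → prod
  workflow_dispatch:
    inputs:
      env: { required: true }
      release_sha: { required: true }

concurrency:
  # one in-flight deploy per environment; a newer run cancels the older
  group: deploy-${{ startsWith(github.ref, 'refs/tags/') && 'prod' || 'staging' }}
  cancel-in-progress: true
```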
Prerequisite Forgejo Variables/Secrets the operator sets up:
Variables:
  FORGEJO_REGISTRY_URL     base URL for generic packages,
                           e.g. https://forgejo.veza.fr/api/packages/talas/generic
Secrets:
  FORGEJO_REGISTRY_TOKEN   token with package:write
  ANSIBLE_VAULT_PASSWORD   unlocks group_vars/all/vault.yml
Self-hosted runner expectation:
  Runs in the srv-102v container, with /var/lib/incus/unix.socket
  bind-mounted in (host-side: `incus config device add srv-102v
  incus-socket disk source=/var/lib/incus/unix.socket
  path=/var/lib/incus/unix.socket`). Runner registered with the
  `incus` label so the deploy job pins to it.
Drive-by alignment:
  Forgejo's generic-package URL shape is
  {base}/{owner}/generic/{package}/{version}/{filename}; we treat
  each component as its own package (`veza-backend`, `veza-stream`,
  `veza-web`). Updated three references (group_vars/all/main.yml's
  veza_artifact_base_url, veza_app/defaults/main.yml's
  veza_app_artifact_url, deploy_app.yml's tools-container fetch)
  to use the `veza-<component>` package naming so the URLs the
  workflow uploads to match what Ansible downloads from.
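The upload side of that contract, as a workflow-step sketch (the step wiring and SHA plumbing are assumptions; the URL shape mirrors veza_app_artifact_url):

```yaml
# build-backend upload step — illustrative sketch
- name: PUT tarball to the Forgejo generic package registry
  env:
    URL: ${{ vars.FORGEJO_REGISTRY_URL }}
    TOKEN: ${{ secrets.FORGEJO_REGISTRY_TOKEN }}
    SHA: ${{ needs.resolve.outputs.release_sha }}   # assumed output name
  run: |
    curl --fail -X PUT \
      -H "Authorization: token ${TOKEN}" \
      --upload-file "veza-backend-${SHA}.tar.zst" \
      "${URL}/veza-backend/${SHA}/veza-backend-${SHA}.tar.zst"
```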
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 12:39:25 +00:00

# `veza-<component>` is the Forgejo package name (one package per
# component); SHA is the version; tarball is the filename.
veza_app_artifact_url: "{{ veza_artifact_base_url }}/veza-{{ veza_component }}/{{ veza_release_sha }}/veza-{{ veza_component }}-{{ veza_release_sha }}.tar.zst"
# How long to wait for the container's network namespace to come up
# after `incus launch` before we start running tasks against it.
# Debian 13 with a small profile is ready in ~3-5s; 30s is a safety net.
veza_app_container_ready_timeout: 30