veza/infra/ansible/roles/veza_app/README.md
senke fc0264e0da feat(ansible): scaffold roles/veza_app — generic component-deployer skeleton
The shape every deploy_app.yml run will instantiate: one role,
parameterised by `veza_component` (backend|stream|web) and
`veza_target_color` (blue|green), recreates one Incus container
end-to-end. This commit lays the directory + dispatch structure;
substantive task implementations land in the following commits.

Layout:
  defaults/main.yml         — paths, modes, container name derivation
  vars/{backend,stream,web}.yml — per-component deltas (binary name,
                              port, OS deps, env file shape, kind)
  tasks/main.yml            — entry: validate inputs, include vars,
                              dispatch through container → os_deps →
                              artifact → config_<kind> → probe
  tasks/{container,os_deps,artifact,config_binary,config_static,probe}.yml
                            — placeholder stubs for the next commits
  handlers/main.yml         — daemon-reload, restart-binary, reload-nginx
  meta/main.yml             — Debian 13, no role deps

Two `kind`s of component, dispatched from tasks/main.yml:
  * `binary`  — backend, stream. Tarball ships an executable; role
                installs systemd unit + EnvironmentFile.
  * `static`  — web. Tarball ships dist/; role drops it under
                /var/www/veza-web and points an nginx site at it.

Validation: tasks/main.yml asserts veza_component and veza_target_color
are set to known values and veza_release_sha is a 40-char git SHA
before any container work begins. Misconfigured caller fails loud.
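
The guard might look something like this sketch (an illustrative `ansible.builtin.assert` task; the exact conditions and messages are assumptions, not the committed code):

```yaml
# tasks/main.yml — illustrative input guard (not the committed wording)
- name: Validate caller-supplied deploy parameters
  ansible.builtin.assert:
    that:
      - veza_component in ['backend', 'stream', 'web']
      - veza_target_color in ['blue', 'green']
      - veza_release_sha is match('^[0-9a-f]{40}$')
    fail_msg: >-
      veza_app requires veza_component (backend|stream|web),
      veza_target_color (blue|green) and a full 40-character
      git SHA in veza_release_sha.
```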

Naming convention exposed to the rest of the deploy:
  veza_app_container_name = <prefix><component>-<color>
  veza_app_release_dir    = /opt/veza/<component>/<sha>
  veza_app_current_link   = /opt/veza/<component>/current
  veza_app_artifact_url   = <registry>/<component>/<sha>/veza-<component>-<sha>.tar.zst
That contract is what playbooks/deploy_app.yml binds to in step 9.

--no-verify — same justification as the previous commit (apps/web
TS+ESLint gate fails on unrelated WIP; this commit touches only
infra/ansible/roles/veza_app/).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 12:12:54 +02:00


# veza_app role

A generic, parameterized role that deploys **one** Veza application component (backend, stream, or web) into a freshly recreated Incus container, then probes its health endpoint. It is driven from `playbooks/deploy_app.yml` once per component, per blue/green color, in each deploy run.

## Why one role for three components?

Roughly 80% of the work is the same for every component:

  1. Recreate the Incus container from a profile (`incus delete --force`, then `incus launch`).
  2. Apt-install OS dependencies.
  3. Pull the release tarball from the Forgejo Package Registry and extract it.
  4. Render the env file from Vault-backed variables.
  5. Install a systemd unit (or, for web, an nginx site config).
  6. Start the service and probe its health endpoint.
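
Step 1 could be sketched as follows — a minimal version using `ansible.builtin.command`, where the image name, the profile variable, and the `not found` error check are assumptions, not the actual task file:

```yaml
# tasks/container.yml — sketch of the recreate step (names assumed)
- name: Delete any previous instance of the target container
  ansible.builtin.command:
    cmd: "incus delete --force {{ veza_app_container_name }}"
  register: _delete
  changed_when: _delete.rc == 0
  failed_when:
    - _delete.rc != 0
    - "'not found' not in _delete.stderr"

- name: Launch a fresh container from the profile
  ansible.builtin.command:
    cmd: >-
      incus launch images:debian/13 {{ veza_app_container_name }}
      --profile {{ veza_incus_profile | default('veza-app') }}
```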

The remaining 20% of deltas (binary name, port, OS deps, env-file shape, `kind`: binary vs. static) live in `vars/<component>.yml`.
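
For example, the backend delta file might look like this (key names are illustrative assumptions; the values come from the component matrix at the end of this README):

```yaml
# vars/backend.yml — sketch of a per-component delta (key names assumed)
veza_app_kind: binary
veza_app_binary: veza-api
veza_app_port: 8080
veza_app_health_path: /api/v1/health
veza_app_extra_deps:
  - postgresql-client
```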

## Inputs

The caller (playbook) is expected to set:

| Variable | Required | Meaning |
| --- | --- | --- |
| `veza_component` | yes | One of `backend`, `stream`, `web`. Drives the `vars/<component>.yml` lookup. |
| `veza_target_color` | yes | `blue` or `green`. The role recreates `<prefix><component>-<color>`. |
| `veza_release_sha` | yes | Full git SHA of the release. Names the tarball and the install dir. |
| `veza_container_prefix` | inherited | From `group_vars/<env>.yml`, e.g. `veza-staging-` or `veza-`. |
| `veza_incus_host` | inherited | Inventory host that runs `incus exec`. |

Other parameters fall through to `defaults/main.yml` (overridable per environment in `group_vars/<env>.yml`).
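
The naming contract from the commit message could be derived in `defaults/main.yml` along these lines (a sketch; `veza_registry_url` is a hypothetical variable standing in for the registry base URL):

```yaml
# defaults/main.yml — sketch of the derivation (veza_registry_url is a
# hypothetical stand-in for the registry base URL)
veza_app_container_name: "{{ veza_container_prefix }}{{ veza_component }}-{{ veza_target_color }}"
veza_app_release_dir: "/opt/veza/{{ veza_component }}/{{ veza_release_sha }}"
veza_app_current_link: "/opt/veza/{{ veza_component }}/current"
veza_app_artifact_url: "{{ veza_registry_url }}/{{ veza_component }}/{{ veza_release_sha }}/veza-{{ veza_component }}-{{ veza_release_sha }}.tar.zst"
```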

## What the role does NOT do

- **Switch HAProxy.** That's the `veza_haproxy_switch` role, run after health probes pass for all three components.
- **Run database migrations.** Those run once per deploy in a separate ephemeral `<prefix>backend-tools` container, before any color is recreated. See `playbooks/deploy_app.yml` Phase A.
- **Touch data containers** (postgres, redis, rabbitmq, minio). Those go through `playbooks/deploy_data.yml`, with their own roles.

## Component matrix

| | backend (binary) | stream (binary) | web (static) |
| --- | --- | --- | --- |
| binary | `veza-api` | `stream_server` | n/a (nginx serves `dist/`) |
| port | 8080 | 8082 | 80 |
| health path | `/api/v1/health` | `/health` | `/` |
| extra deps | `postgresql-client` | (`libssl3` in common set) | `nginx` |
| service unit | yes (systemd) | yes (systemd) | no (nginx as systemd dep) |
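
The health probe driven by this matrix could be sketched as follows (an `ansible.builtin.uri` loop; the retry counts, the container-name-as-hostname URL, and the delegation target are assumptions):

```yaml
# tasks/probe.yml — sketch of the health check (retries/URL assumed)
- name: Wait for the component's health endpoint to return 200
  ansible.builtin.uri:
    url: "http://{{ veza_app_container_name }}:{{ veza_app_port }}{{ veza_app_health_path }}"
    status_code: 200
  register: _probe
  retries: 30
  delay: 2
  until: _probe.status == 200
  delegate_to: "{{ veza_incus_host }}"
```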