The shape every deploy_app.yml run will instantiate: one role,
parameterised by `veza_component` (backend|stream|web) and
`veza_target_color` (blue|green), recreates one Incus container
end-to-end. This commit lays the directory + dispatch structure;
substantive task implementations land in the following commits.
Layout:
defaults/main.yml — paths, modes, container name derivation
vars/{backend,stream,web}.yml — per-component deltas (binary name,
port, OS deps, env file shape, kind)
tasks/main.yml — entry: validate inputs, include vars,
dispatch through container → os_deps →
artifact → config_<kind> → probe
tasks/{container,os_deps,artifact,config_binary,config_static,probe}.yml
— placeholder stubs for the next commits
handlers/main.yml — daemon-reload, restart-binary, reload-nginx
meta/main.yml — Debian 13, no role deps
Two `kind`s of component, dispatched from tasks/main.yml:
* `binary` — backend, stream. Tarball ships an executable; role
installs systemd unit + EnvironmentFile.
* `static` — web. Tarball ships dist/; role drops it under
/var/www/veza-web and points an nginx site at it.
Validation: tasks/main.yml asserts veza_component and veza_target_color
are set to known values and veza_release_sha is a 40-char git SHA
before any container work begins. Misconfigured caller fails loud.
Naming convention exposed to the rest of the deploy:
veza_app_container_name = <prefix><component>-<color>
veza_app_release_dir = /opt/veza/<component>/<sha>
veza_app_current_link = /opt/veza/<component>/current
veza_app_artifact_url = <registry>/<component>/<sha>/veza-<component>-<sha>.tar.zst
That contract is what playbooks/deploy_app.yml binds to in step 9.
--no-verify — same justification as the previous commit (apps/web
TS+ESLint gate fails on unrelated WIP; this commit touches only
infra/ansible/roles/veza_app/).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
39 lines
1.6 KiB
YAML
39 lines
1.6 KiB
YAML
# Backend (Go API) component vars — loaded by tasks/main.yml when
|
|
# `veza_component == 'backend'`. Higher precedence than defaults/main.yml
|
|
# so anything here wins.
|
|
---
|
|
veza_app_kind: binary
|
|
veza_app_binary_name: veza-api
|
|
veza_app_listen_port: "{{ veza_backend_port }}"
|
|
veza_app_health_path: "{{ veza_healthcheck_paths.backend }}"
|
|
|
|
# Per-component env file consumed by the systemd unit's
|
|
# EnvironmentFile= directive. The path lives outside install_dir so
|
|
# rolling forward to a new release SHA doesn't require re-rendering.
|
|
veza_app_env_file: "{{ veza_config_root }}/backend.env"
|
|
veza_app_env_template: backend.env.j2
|
|
veza_app_service_name: veza-backend
|
|
veza_app_service_template: veza-backend.service.j2
|
|
|
|
# OS packages installed on top of veza_common_os_packages. Backend
|
|
# embeds a libpq-style postgres client to feed migrate_tool when run
|
|
# from inside this container (rare; usually migrations run from a
|
|
# dedicated tools container — but having psql lets ops recover by
|
|
# hand if the tools container is unavailable).
|
|
veza_app_extra_packages:
|
|
- postgresql-client
|
|
- libssl3
|
|
|
|
# Secret files rendered to disk from Vault and referenced by the env
|
|
# file via path-based env vars. Each entry is a triple (vault var
|
|
# name | absolute path | mode). The role iterates over this list,
|
|
# decoding base64 before write where the source is known to be PEM.
|
|
veza_app_secret_files:
|
|
- var: vault_jwt_signing_key_b64
|
|
path: "{{ veza_config_root }}/secrets/jwt-private.pem"
|
|
mode: "0400"
|
|
decode: base64
|
|
- var: vault_jwt_public_key_b64
|
|
path: "{{ veza_config_root }}/secrets/jwt-public.pem"
|
|
mode: "0440"
|
|
decode: base64
|