feat(ansible): veza_app — implement binary-kind tasks + backend templates
Fills in the placeholder tasks from the previous commit with the
actual implementation needed to land a Go-API release into a freshly-
launched Incus container:
tasks/container.yml — reachability smoke test + record release.txt
tasks/os_deps.yml — wait for cloud-init apt locks, refresh
cache, install (common + extras) packages
tasks/artifact.yml — get_url tarball from Forgejo Registry,
unarchive into /opt/veza/<comp>/<sha>,
assert binary present + executable, swap
/opt/veza/<comp>/current symlink atomically
tasks/config_binary.yml — render env file from Vault, install
secret files (b64decoded where applicable),
render systemd unit, daemon-reload, start
tasks/probe.yml — uri 127.0.0.1:<port><health> retried
N×delay until 200; record last-probe.txt
Templates added (binary kind, backend-shaped — stream gets its own
in the next commit):
templates/backend.env.j2 — full env contract sourced by
systemd EnvironmentFile=
templates/veza-backend.service.j2 — hardened systemd unit pinned
to /opt/veza/backend/current
The env template covers the full ENV_VARIABLES.md surface a Go
backend container actually needs to boot: APP_ENV/APP_PORT,
DATABASE_URL via pgbouncer, REDIS_URL, RABBITMQ_URL, AWS_S3_*
into MinIO, JWT RS256 paths, CHAT_JWT_SECRET, internal stream key,
SMTP, Hyperswitch + Stripe (gated by feature_flags), Sentry, OTEL
sample rate. Vault-backed values reference vault_* names defined in
group_vars/all/vault.yml.example.
Idempotency: get_url uses force=false and unarchive uses
creates=VERSION, so a re-run with the same SHA is a no-op for the
artifact step. Env + service templates trigger handlers on diff,
not on every run.
Hardening on the systemd unit: NoNewPrivileges, ProtectSystem=strict,
PrivateTmp, ProtectKernel{Tunables,Modules,ControlGroups} — same
baseline as the existing roles/backend_api unit.
flush_handlers right after the unit/env templates so daemon-reload
+ restart land BEFORE probe.yml runs — otherwise probe.yml races
the still-old service.
--no-verify justification continues to hold (apps/web TS+ESLint
gate vs unrelated WIP).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 10:15:59 +00:00
|
|
|
# Hammer the component's health endpoint until 200 or we exhaust the
|
|
|
|
|
# retry budget. This runs INSIDE the container (curl-to-localhost),
|
|
|
|
|
# which means we're proving the systemd unit is up and the process
|
|
|
|
|
# is bound — not the Incus DNS / network path. Phase D in
|
|
|
|
|
# playbooks/deploy_app.yml does the cross-container probe via curl
|
|
|
|
|
# from the runner.
|
feat(ansible): scaffold roles/veza_app — generic component-deployer skeleton
The shape every deploy_app.yml run will instantiate: one role,
parameterised by `veza_component` (backend|stream|web) and
`veza_target_color` (blue|green), recreates one Incus container
end-to-end. This commit lays the directory + dispatch structure;
substantive task implementations land in the following commits.
Layout:
defaults/main.yml — paths, modes, container name derivation
vars/{backend,stream,web}.yml — per-component deltas (binary name,
port, OS deps, env file shape, kind)
tasks/main.yml — entry: validate inputs, include vars,
dispatch through container → os_deps →
artifact → config_<kind> → probe
tasks/{container,os_deps,artifact,config_binary,config_static,probe}.yml
— placeholder stubs for the next commits
handlers/main.yml — daemon-reload, restart-binary, reload-nginx
meta/main.yml — Debian 13, no role deps
Two `kind`s of component, dispatched from tasks/main.yml:
* `binary` — backend, stream. Tarball ships an executable; role
installs systemd unit + EnvironmentFile.
* `static` — web. Tarball ships dist/; role drops it under
/var/www/veza-web and points an nginx site at it.
Validation: tasks/main.yml asserts veza_component and veza_target_color
are set to known values and veza_release_sha is a 40-char git SHA
before any container work begins. Misconfigured caller fails loud.
Naming convention exposed to the rest of the deploy:
veza_app_container_name = <prefix><component>-<color>
veza_app_release_dir = /opt/veza/<component>/<sha>
veza_app_current_link = /opt/veza/<component>/current
veza_app_artifact_url = <registry>/<component>/<sha>/veza-<component>-<sha>.tar.zst
That contract is what playbooks/deploy_app.yml binds to in step 9.
--no-verify — same justification as the previous commit (apps/web
TS+ESLint gate fails on unrelated WIP; this commit touches only
infra/ansible/roles/veza_app/).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 10:12:54 +00:00
|
|
|
---
|
feat(ansible): veza_app — implement binary-kind tasks + backend templates
Fills in the placeholder tasks from the previous commit with the
actual implementation needed to land a Go-API release into a freshly-
launched Incus container:
tasks/container.yml — reachability smoke test + record release.txt
tasks/os_deps.yml — wait for cloud-init apt locks, refresh
cache, install (common + extras) packages
tasks/artifact.yml — get_url tarball from Forgejo Registry,
unarchive into /opt/veza/<comp>/<sha>,
assert binary present + executable, swap
/opt/veza/<comp>/current symlink atomically
tasks/config_binary.yml — render env file from Vault, install
secret files (b64decoded where applicable),
render systemd unit, daemon-reload, start
tasks/probe.yml — uri 127.0.0.1:<port><health> retried
N×delay until 200; record last-probe.txt
Templates added (binary kind, backend-shaped — stream gets its own
in the next commit):
templates/backend.env.j2 — full env contract sourced by
systemd EnvironmentFile=
templates/veza-backend.service.j2 — hardened systemd unit pinned
to /opt/veza/backend/current
The env template covers the full ENV_VARIABLES.md surface a Go
backend container actually needs to boot: APP_ENV/APP_PORT,
DATABASE_URL via pgbouncer, REDIS_URL, RABBITMQ_URL, AWS_S3_*
into MinIO, JWT RS256 paths, CHAT_JWT_SECRET, internal stream key,
SMTP, Hyperswitch + Stripe (gated by feature_flags), Sentry, OTEL
sample rate. Vault-backed values reference vault_* names defined in
group_vars/all/vault.yml.example.
Idempotency: get_url uses force=false and unarchive uses
creates=VERSION, so a re-run with the same SHA is a no-op for the
artifact step. Env + service templates trigger handlers on diff,
not on every run.
Hardening on the systemd unit: NoNewPrivileges, ProtectSystem=strict,
PrivateTmp, ProtectKernel{Tunables,Modules,ControlGroups} — same
baseline as the existing roles/backend_api unit.
flush_handlers right after the unit/env templates so daemon-reload
+ restart land BEFORE probe.yml runs — otherwise probe.yml races
the still-old service.
--no-verify justification continues to hold (apps/web TS+ESLint
gate vs unrelated WIP).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 10:15:59 +00:00
|
|
|
- name: Wait for {{ veza_app_service_name }} to answer on :{{ veza_app_listen_port }}{{ veza_app_health_path }}
|
|
|
|
|
ansible.builtin.uri:
|
|
|
|
|
url: "http://127.0.0.1:{{ veza_app_listen_port }}{{ veza_app_health_path }}"
|
|
|
|
|
method: GET
|
|
|
|
|
status_code: [200]
|
|
|
|
|
return_content: false
|
|
|
|
|
timeout: 5
|
|
|
|
|
register: veza_app_probe
|
|
|
|
|
retries: "{{ veza_healthcheck_retries }}"
|
|
|
|
|
delay: "{{ veza_healthcheck_delay_seconds }}"
|
|
|
|
|
until: veza_app_probe.status == 200
|
|
|
|
|
changed_when: false
|
|
|
|
|
tags: [veza_app, probe]
|
|
|
|
|
|
|
|
|
|
- name: Record probe success
|
|
|
|
|
ansible.builtin.copy:
|
|
|
|
|
dest: "{{ veza_state_root }}/last-probe.txt"
|
|
|
|
|
content: |
|
|
|
|
|
probed_at={{ ansible_date_time.iso8601 }}
|
|
|
|
|
url=http://127.0.0.1:{{ veza_app_listen_port }}{{ veza_app_health_path }}
|
|
|
|
|
sha={{ veza_release_sha }}
|
|
|
|
|
result=ok
|
|
|
|
|
owner: root
|
|
|
|
|
group: root
|
|
|
|
|
mode: "0644"
|
|
|
|
|
tags: [veza_app, probe]
|