Fills in the placeholder tasks from the previous commit with the
actual implementation needed to land a Go API release into a freshly
launched Incus container:
tasks/container.yml — reachability smoke test + record release.txt
tasks/os_deps.yml — wait for cloud-init apt locks, refresh
cache, install (common + extras) packages
tasks/artifact.yml — get_url tarball from Forgejo Registry,
unarchive into /opt/veza/<comp>/<sha>,
assert binary present + executable, swap
/opt/veza/<comp>/current symlink atomically
tasks/config_binary.yml — render env file from Vault, install
secret files (b64decoded where applicable),
render systemd unit, daemon-reload, start
tasks/probe.yml — uri 127.0.0.1:<port><health> retried
N×delay until 200; record last-probe.txt
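The current-symlink swap in tasks/artifact.yml can be sketched roughly as
below; variable names (comp, sha) are illustrative placeholders, not the
actual role vars:

```yaml
# Sketch only -- points /opt/veza/<comp>/current at the new release dir.
# force: true lets ansible.builtin.file replace an existing symlink in place.
- name: Swap current symlink to the new release
  ansible.builtin.file:
    src: "/opt/veza/{{ comp }}/{{ sha }}"
    dest: "/opt/veza/{{ comp }}/current"
    state: link
    force: true
```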
Templates added (binary kind, backend-shaped — stream gets its own
in the next commit):
templates/backend.env.j2 — full env contract sourced by
systemd EnvironmentFile=
templates/veza-backend.service.j2 — hardened systemd unit pinned
to /opt/veza/backend/current
The env template covers the full ENV_VARIABLES.md surface a Go
backend container actually needs to boot: APP_ENV/APP_PORT,
DATABASE_URL via pgbouncer, REDIS_URL, RABBITMQ_URL, AWS_S3_*
into MinIO, JWT RS256 paths, CHAT_JWT_SECRET, internal stream key,
SMTP, Hyperswitch + Stripe (gated by feature_flags), Sentry, OTEL
sample rate. Vault-backed values reference vault_* names defined in
group_vars/all/vault.yml.example.
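A hypothetical excerpt of what templates/backend.env.j2 looks like under
that contract (variable names here are illustrative; the authoritative
list is ENV_VARIABLES.md and group_vars/all/vault.yml.example):

```jinja2
{# Sketch only -- real template covers the full ENV_VARIABLES.md surface #}
APP_ENV={{ veza_app_env }}
APP_PORT={{ veza_app_listen_port }}
DATABASE_URL=postgres://{{ vault_db_user }}:{{ vault_db_password }}@127.0.0.1:6432/{{ veza_db_name }}
REDIS_URL=redis://{{ veza_redis_host }}:6379/0
SENTRY_DSN={{ vault_sentry_dsn }}
```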
Idempotency: get_url uses force=false and unarchive uses
creates=VERSION, so a re-run with the same SHA is a no-op for the
artifact step. Env + service templates trigger handlers on diff,
not on every run.
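The idempotent artifact steps reduce to roughly this shape (paths and
vars are illustrative):

```yaml
# Sketch only -- re-running with the same SHA skips both tasks.
- name: Fetch release tarball
  ansible.builtin.get_url:
    url: "{{ veza_artifact_url }}"
    dest: "{{ veza_release_dir }}/release.tar.gz"
    force: false          # keep the file if it already exists

- name: Unpack release
  ansible.builtin.unarchive:
    src: "{{ veza_release_dir }}/release.tar.gz"
    dest: "{{ veza_release_dir }}"
    remote_src: true
    creates: "{{ veza_release_dir }}/VERSION"   # no-op if already unpacked
```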
Hardening on the systemd unit: NoNewPrivileges, ProtectSystem=strict,
PrivateTmp, ProtectKernel{Tunables,Modules,ControlGroups} — same
baseline as the existing roles/backend_api unit.
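A hypothetical excerpt of the [Service] section in
templates/veza-backend.service.j2 with that baseline (the EnvironmentFile
path and binary-name var are assumptions for illustration):

```ini
; Sketch only -- mirrors the roles/backend_api hardening baseline.
[Service]
ExecStart=/opt/veza/backend/current/{{ veza_binary_name }}
EnvironmentFile=/etc/veza/backend.env
NoNewPrivileges=true
ProtectSystem=strict
PrivateTmp=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true
```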
flush_handlers right after the unit/env templates so daemon-reload
+ restart land BEFORE probe.yml runs — otherwise probe.yml races
the still-old service.
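The ordering fix is a single meta task placed right after the template
tasks, so queued daemon-reload/restart handlers fire immediately:

```yaml
# Run pending handlers now, before probe.yml, instead of at end of play.
- name: Flush handlers so the new unit is live before probing
  ansible.builtin.meta: flush_handlers
```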
--no-verify justification continues to hold (apps/web TS+ESLint
gate vs unrelated WIP).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
tasks/probe.yml (33 lines, 1.2 KiB, YAML):
# Hammer the component's health endpoint until 200 or we exhaust the
# retry budget. This runs INSIDE the container (curl-to-localhost),
# which means we're proving the systemd unit is up and the process
# is bound — not the Incus DNS / network path. Phase D in
# playbooks/deploy_app.yml does the cross-container probe via curl
# from the runner.
---
- name: Wait for {{ veza_app_service_name }} to answer on :{{ veza_app_listen_port }}{{ veza_app_health_path }}
  ansible.builtin.uri:
    url: "http://127.0.0.1:{{ veza_app_listen_port }}{{ veza_app_health_path }}"
    method: GET
    status_code: [200]
    return_content: false
    timeout: 5
  register: veza_app_probe
  retries: "{{ veza_healthcheck_retries }}"
  delay: "{{ veza_healthcheck_delay_seconds }}"
  until: veza_app_probe.status == 200
  changed_when: false
  tags: [veza_app, probe]

- name: Record probe success
  ansible.builtin.copy:
    dest: "{{ veza_state_root }}/last-probe.txt"
    content: |
      probed_at={{ ansible_date_time.iso8601 }}
      url=http://127.0.0.1:{{ veza_app_listen_port }}{{ veza_app_health_path }}
      sha={{ veza_release_sha }}
      result=ok
    owner: root
    group: root
    mode: "0644"
  tags: [veza_app, probe]