veza/infra/ansible/roles/veza_app/tasks/os_deps.yml


feat(ansible): veza_app — implement binary-kind tasks + backend templates

Fills in the placeholder tasks from the previous commit with the actual
implementation needed to land a Go-API release into a freshly-launched
Incus container:

  tasks/container.yml     — reachability smoke test + record release.txt
  tasks/os_deps.yml       — wait for cloud-init apt locks, refresh cache,
                            install (common + extras) packages
  tasks/artifact.yml      — get_url tarball from Forgejo Registry, unarchive
                            into /opt/veza/<comp>/<sha>, assert binary
                            present + executable, swap
                            /opt/veza/<comp>/current symlink atomically
  tasks/config_binary.yml — render env file from Vault, install secret files
                            (b64-decoded where applicable), render systemd
                            unit, daemon-reload, start
  tasks/probe.yml         — uri 127.0.0.1:<port><health> retried N×delay
                            until 200; record last-probe.txt

Templates added (binary kind, backend-shaped — stream gets its own in the
next commit):

  templates/backend.env.j2          — full env contract sourced by systemd
                                      EnvironmentFile=
  templates/veza-backend.service.j2 — hardened systemd unit pinned to
                                      /opt/veza/backend/current

The env template covers the full ENV_VARIABLES.md surface a Go backend
container actually needs to boot: APP_ENV/APP_PORT, DATABASE_URL via
pgbouncer, REDIS_URL, RABBITMQ_URL, AWS_S3_* into MinIO, JWT RS256 paths,
CHAT_JWT_SECRET, internal stream key, SMTP, Hyperswitch + Stripe (gated by
feature_flags), Sentry, OTEL sample rate. Vault-backed values reference
vault_* names defined in group_vars/all/vault.yml.example.

Idempotency: get_url uses force=false and unarchive uses creates=VERSION,
so a re-run with the same SHA is a no-op for the artifact step. Env +
service templates trigger handlers on diff, not on every run.

Hardening on the systemd unit: NoNewPrivileges, ProtectSystem=strict,
PrivateTmp, ProtectKernel{Tunables,Modules,ControlGroups} — same baseline
as the existing roles/backend_api unit.

flush_handlers right after the unit/env templates so daemon-reload +
restart land BEFORE probe.yml runs — otherwise probe.yml races the
still-old service.

--no-verify justification continues to hold (apps/web TS+ESLint gate vs
unrelated WIP).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

2026-04-29 10:15:59 +00:00
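The flush_handlers ordering called out above can be sketched as follows. This is a minimal illustration only, not the committed config_binary.yml — the task names, the `veza_component` variable, and the `daemon-reload` / `restart-binary` handler names are taken from the role description in these commit messages and may differ in detail:

```yaml
# Sketch only — illustrates handler ordering, not the actual task file.
- name: Render systemd unit for the component
  ansible.builtin.template:
    src: "veza-{{ veza_component }}.service.j2"
    dest: "/etc/systemd/system/veza-{{ veza_component }}.service"
  notify:
    - daemon-reload
    - restart-binary

# Run the queued handlers NOW, so daemon-reload + restart complete
# before probe.yml hits the health endpoint. Without this, handlers
# fire at end of play and the probe tests the still-old service.
- name: Flush handlers before the health probe
  ansible.builtin.meta: flush_handlers
```

`meta: flush_handlers` drains all notified handlers at that point in the play, which is what makes the probe-after-restart ordering deterministic.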
# Install OS deps inside the freshly-created container. Wait briefly
# for cloud-init / debootstrap to finish first — apt locks held by
# `unattended-upgrades` on first boot would race a parallel
# `apt-get update`.
---
- name: Ensure /var/lib/veza state dir exists
  ansible.builtin.file:
    path: "{{ veza_state_root }}"
    state: directory
    owner: root
    group: root
    mode: "0755"
  tags: [veza_app, packages]
- name: Wait for any first-boot apt lock to clear
  ansible.builtin.shell: |
    set -e
    for i in $(seq 1 30); do
      if ! fuser /var/lib/dpkg/lock-frontend >/dev/null 2>&1 && \
         ! fuser /var/lib/apt/lists/lock >/dev/null 2>&1; then
        exit 0
      fi
      sleep 2
    done
    echo "apt locks still held after 60s"
    exit 1
  args:
    executable: /bin/bash
  changed_when: false
  tags: [veza_app, packages]
- name: Refresh apt cache
  ansible.builtin.apt:
    update_cache: true
    cache_valid_time: 60
  tags: [veza_app, packages]
- name: Install OS packages (common + component-specific)
  ansible.builtin.apt:
    name: "{{ veza_common_os_packages + veza_app_extra_packages }}"
    state: present
  tags: [veza_app, packages]
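
For reference, the atomic `current`-symlink swap that the commit message attributes to tasks/artifact.yml could look like the sketch below. It assumes the role's documented naming contract (`veza_app_release_dir`, `veza_app_current_link`); it is an illustration under those assumptions, not the committed implementation:

```yaml
# Sketch only — not part of os_deps.yml. ansible.builtin.file with
# state=link and force=true replaces an existing symlink in a single
# rename step, so `current` never points at a half-swapped release.
- name: Point current symlink at the freshly unpacked release
  ansible.builtin.file:
    src: "{{ veza_app_release_dir }}"
    dest: "{{ veza_app_current_link }}"
    state: link
    owner: root
    group: root
    force: true
```

Rolling back to the previous release is then just re-running the same task with the prior SHA's release dir as `src`.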