veza/infra/ansible/roles/veza_app/templates/veza-stream.service.j2
senke 3123f26fd4 feat(ansible): veza_app — stream component templates (env + systemd)
Drop in the two stream-specific files that the previously implemented
binary-kind tasks already reference via vars/stream.yml:

  templates/stream.env.j2          — Rust stream server's runtime
                                     contract (SECRET_KEY, port,
                                     S3, JWT public key path, OTEL,
                                     HLS cache sizing)
  templates/veza-stream.service.j2 — systemd unit, identical
                                     hardening to the backend's,
                                     but LimitNOFILE bumped to
                                     131072 (default 1024 chokes
                                     around 200 concurrent WS
                                     listeners)
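As a sketch of the runtime contract listed above (only SECRET_KEY, vault_stream_internal_api_key, the /var/lib/veza/hls path, and the 512 MB cap come from this message; every other variable name is a hypothetical placeholder, not the committed template):

```jinja
# Managed by Ansible; do not edit by hand.
SECRET_KEY={{ vault_stream_internal_api_key }}
STREAM_LISTEN_PORT={{ veza_stream_port }}            {# hypothetical var #}
S3_ENDPOINT={{ veza_s3_endpoint }}                   {# hypothetical var #}
JWT_PUBLIC_KEY_PATH={{ veza_jwt_public_key_path }}   {# verify-only; no signing key mounted #}
OTEL_EXPORTER_OTLP_ENDPOINT={{ veza_otel_endpoint }} {# hypothetical var #}
HLS_CACHE_DIR=/var/lib/veza/hls
HLS_CACHE_MAX_BYTES={{ 512 * 1024 * 1024 }}
```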

The env template makes deliberate choices the backend doesn't share:

  * SECRET_KEY = vault_stream_internal_api_key (same value the
    backend stamps in X-Internal-API-Key) — stream uses this for
    HMAC-signing HLS segment URLs and rejects internal calls
    without a matching header.
  * Only the JWT public key is mounted (stream verifies, never
    signs).
  * RabbitMQ URL is provided, but the app tolerates RMQ being down
    (degraded mode, per veza-stream-server/src/lib.rs).
  * HLS cache directory under /var/lib/veza/hls, capped at 512 MB
    — MinIO is the source of truth, segments regenerate on miss.
  * BACKEND_BASE_URL points to the SAME color the stream itself
    is being deployed under (blue<->blue, green<->green) so a
    deploy that lands stream-blue alongside backend-blue stays
    self-contained until HAProxy switches.
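The signing scheme from the first bullet lives in the Rust stream server and is not part of this commit; purely as an illustration of how a shared-secret HMAC over segment URLs can work, here is a Python sketch (the key value, message layout, and query-parameter names are assumptions, not the server's actual format):

```python
import hashlib
import hmac

# Placeholder for the vaulted vault_stream_internal_api_key value.
SECRET_KEY = b"example-shared-secret"

def sign_segment_url(path: str, expires: int) -> str:
    """Append an expiry and HMAC token to an HLS segment path (illustrative)."""
    msg = f"{path}:{expires}".encode()
    token = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return f"{path}?expires={expires}&token={token}"

def verify_segment_url(path: str, expires: int, token: str, now: int) -> bool:
    """Reject expired or tampered URLs; constant-time compare avoids timing leaks."""
    if now > expires:
        return False
    msg = f"{path}:{expires}".encode()
    expected = hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

The same shared secret doubles as the internal-call gate: a request is accepted only when its X-Internal-API-Key header equals the configured value, ideally compared with hmac.compare_digest as well.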

No new tasks needed — config_binary.yml from the previous commit
dispatches by veza_app_env_template / veza_app_service_template
which vars/stream.yml has pointed at the right files since the
skeleton commit.
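For context, the dispatch described above might look roughly like this; apart from veza_app_env_template / veza_app_service_template and the two filenames, every path and task name is an assumption, not the committed code:

```yaml
# vars/stream.yml (sketch)
veza_app_env_template: stream.env.j2
veza_app_service_template: veza-stream.service.j2

# tasks/config_binary.yml (sketch): renders whichever templates the
# component's vars file points at.
- name: Render component env file
  ansible.builtin.template:
    src: "{{ veza_app_env_template }}"
    dest: "{{ veza_app_env_file }}"
    owner: "{{ veza_app_user }}"
    mode: "0600"
  notify: Restart veza component
```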

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-29 12:16:58 +02:00


# Managed by Ansible — do not edit by hand.
# veza_app role, templates/veza-stream.service.j2.
# Released SHA: {{ veza_release_sha }}; color: {{ veza_target_color }}
[Unit]
Description=Veza stream server (Rust/Axum) — color {{ veza_target_color }}, sha {{ veza_release_sha[:12] }}
Documentation=https://veza.fr/docs
After=network-online.target
Wants=network-online.target
AssertPathExists={{ veza_app_current_link }}/{{ veza_app_binary_name }}

[Service]
Type=simple
User={{ veza_app_user }}
Group={{ veza_app_group }}
EnvironmentFile=-{{ veza_app_env_file }}
WorkingDirectory={{ veza_app_current_link }}
ExecStart={{ veza_app_current_link }}/{{ veza_app_binary_name }}
Restart=on-failure
RestartSec=5s
# Stream server holds many WebSocket + HLS connections in flight;
# the default LimitNOFILE=1024 chokes around 200 concurrent listeners.
LimitNOFILE=131072
# Hardening — same baseline as the backend.
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths={{ veza_app_install_dir }} {{ veza_log_root }} {{ veza_state_root }}
PrivateTmp=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true

[Install]
WantedBy=multi-user.target