feat(ansible): veza_app — stream component templates (env + systemd)

Drop in the two stream-specific files that the previously implemented
binary-kind tasks already reference via vars/stream.yml:

  templates/stream.env.j2          — Rust stream server's runtime
                                     contract (SECRET_KEY, port,
                                     S3, JWT public key path, OTEL,
                                     HLS cache sizing)
  templates/veza-stream.service.j2 — systemd unit, identical
                                     hardening to the backend's,
                                     but LimitNOFILE bumped to
                                     131072 (default 1024 chokes
                                     around 200 concurrent WS
                                     listeners)

The env template makes deliberate choices the backend doesn't share:

  * SECRET_KEY = vault_stream_internal_api_key (same value the
    backend stamps in X-Internal-API-Key) — stream uses this to
    HMAC-sign HLS segment URLs and rejects internal calls
    without a matching header (first sketch after this list).
  * Only the JWT public key is mounted (stream verifies tokens,
    never signs them; second sketch below).
  * RabbitMQ URL is provided, but the app tolerates RMQ being
    down (degraded mode, per veza-stream-server/src/lib.rs;
    third sketch below).
  * HLS cache directory under /var/lib/veza/hls, capped at
    512 MiB (512 × 1024 × 1024 = 536870912 bytes, matching
    HLS_CACHE_MAX_BYTES below) — MinIO is the source of truth,
    so segments regenerate on a cache miss.
  * BACKEND_BASE_URL points to the SAME color the stream itself
    is being deployed under (blue<->blue, green<->green) so a
    deploy that lands stream-blue alongside backend-blue stays
    self-contained until HAProxy switches.
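
To make the SECRET_KEY reuse concrete, a minimal sketch of the URL
signing: names and signatures here are hypothetical (the real logic
lives in veza-stream-server), and it assumes the hmac, sha2, and hex
crates.

    use hmac::{Hmac, Mac};
    use sha2::Sha256;

    type HmacSha256 = Hmac<Sha256>;

    /// Append an expiry and an HMAC tag to a segment path,
    /// keyed by SECRET_KEY.
    fn sign_segment_url(secret_key: &[u8], path: &str, expires_unix: u64) -> String {
        let mut mac = HmacSha256::new_from_slice(secret_key)
            .expect("HMAC-SHA256 accepts keys of any length");
        mac.update(path.as_bytes());
        mac.update(expires_unix.to_string().as_bytes());
        let sig = hex::encode(mac.finalize().into_bytes());
        format!("{path}?expires={expires_unix}&sig={sig}")
    }

    /// Constant-time check of an incoming signed URL. Callers must
    /// still compare expires_unix against the current time.
    fn verify_segment_url(secret_key: &[u8], path: &str, expires_unix: u64, sig_hex: &str) -> bool {
        let mut mac = HmacSha256::new_from_slice(secret_key)
            .expect("HMAC-SHA256 accepts keys of any length");
        mac.update(path.as_bytes());
        mac.update(expires_unix.to_string().as_bytes());
        match hex::decode(sig_hex) {
            Ok(sig) => mac.verify_slice(&sig).is_ok(),
            Err(_) => false,
        }
    }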
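
The verify-only JWT posture, sketched with the jsonwebtoken crate;
the claim fields are illustrative assumptions, not the service's
actual claims:

    use jsonwebtoken::{decode, Algorithm, DecodingKey, Validation};
    use serde::Deserialize;

    #[derive(Deserialize)]
    struct Claims {
        sub: String, // illustrative field; actual claims not shown here
        exp: usize,
    }

    /// RS256 verification with the backend's public key. There is
    /// deliberately no signing path on the stream side.
    fn verify_token(
        token: &str,
        public_key_pem: &[u8],
    ) -> Result<Claims, jsonwebtoken::errors::Error> {
        let key = DecodingKey::from_rsa_pem(public_key_pem)?;
        let data = decode::<Claims>(token, &key, &Validation::new(Algorithm::RS256))?;
        Ok(data.claims)
    }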
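
And the degraded-mode startup the RabbitMQ bullet points at, again
hedged: this assumes the lapin crate, the function name is made up,
and the actual handling is in veza-stream-server/src/lib.rs.

    use lapin::{Connection, ConnectionProperties};

    /// Try the event bus once at boot; None means "run degraded".
    async fn connect_event_bus(rabbitmq_url: &str) -> Option<Connection> {
        match Connection::connect(rabbitmq_url, ConnectionProperties::default()).await {
            Ok(conn) => Some(conn),
            Err(err) => {
                // Streaming still works without RMQ; we only lose
                // event publishing until the broker comes back.
                tracing::warn!(%err, "RabbitMQ unreachable, starting in degraded mode");
                None
            }
        }
    }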

No new tasks needed — config_binary.yml from the previous commit
dispatches on veza_app_env_template / veza_app_service_template,
which vars/stream.yml has pointed at the right files since the
skeleton commit (wiring sketched just below).
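
Roughly, that wiring (the two variable names are the real ones cited
above; the task and everything else here is an illustrative
reconstruction, not the actual file contents):

    # vars/stream.yml (sketch)
    veza_app_env_template: stream.env.j2
    veza_app_service_template: veza-stream.service.j2

    # config_binary.yml (sketch): renders whichever template the
    # active kind's vars file selected.
    - name: Render app env file
      ansible.builtin.template:
        src: "{{ veza_app_env_template }}"
        dest: "{{ veza_app_env_file }}"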

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>

templates/stream.env.j2
@@ -0,0 +1,54 @@
# Managed by Ansible — do not edit by hand. veza_app role,
# templates/stream.env.j2; rendered fresh on every deploy.
# Sourced by /etc/systemd/system/veza-stream.service via EnvironmentFile=.
# --- Runtime ---------------------------------------------------------
APP_ENV={{ veza_env }}
RUST_LOG={{ veza_log_level | lower }}
PORT={{ veza_stream_port }}
HOST=0.0.0.0
RELEASE_SHA={{ veza_release_sha }}
COLOR={{ veza_target_color }}
# --- Required: stream server's symmetric secret (≥32 chars). ---------
# Reused for HMAC signing of HLS segment URLs + cache key salting.
# Distinct from JWT signing — stream verifies tokens with the
# backend's RS256 public key (path below) but signs its own
# short-lived stream URLs with this.
SECRET_KEY={{ vault_stream_internal_api_key }}
# --- Backend ↔ stream shared secret -----------------------------------
# Same value the backend stamps in X-Internal-API-Key for /api/v1/internal/*.
# Stream rejects internal calls without a matching header.
INTERNAL_API_KEY={{ vault_stream_internal_api_key }}
BACKEND_BASE_URL=http://{{ veza_container_prefix }}backend-{{ veza_target_color }}.{{ veza_incus_dns_suffix }}:{{ veza_backend_port }}
# --- JWT verification (RS256 public key only — stream never signs) ---
JWT_PUBLIC_KEY_PATH={{ veza_config_root }}/secrets/jwt-public.pem
JWT_ALGORITHM=RS256
# --- Object storage (MinIO — pulls audio for transcode + HLS) -------
S3_ENDPOINT=http://{{ veza_container_prefix }}minio-1.{{ veza_incus_dns_suffix }}:9000
S3_REGION=us-east-1
S3_ACCESS_KEY={{ vault_minio_access_key }}
S3_SECRET_KEY={{ vault_minio_secret_key }}
S3_BUCKET=veza-{{ veza_env }}
# --- RabbitMQ (event bus — degraded mode tolerated, see lib.rs) ------
RABBITMQ_URL=amqp://veza:{{ vault_rabbitmq_password }}@{{ veza_container_prefix }}rabbitmq.{{ veza_incus_dns_suffix }}:5672/veza
# --- Observability ---------------------------------------------------
SENTRY_DSN={{ vault_sentry_dsn | default('') }}
OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector.{{ veza_incus_dns_suffix }}:4317
OTEL_SERVICE_NAME=veza-stream
OTEL_TRACES_SAMPLER=parentbased_traceidratio
OTEL_TRACES_SAMPLER_ARG={{ veza_otel_sample_rate }}
OTEL_RESOURCE_ATTRIBUTES=deployment.environment={{ veza_env }},service.version={{ veza_release_sha[:12] }}
# --- Streaming-specific -----------------------------------------------
# HLS segment cache lives under {{ veza_state_root }}/hls — sized small
# (512 MiB) since MinIO is the source of truth and segments are
# regenerated on miss.
HLS_CACHE_DIR={{ veza_state_root }}/hls
HLS_CACHE_MAX_BYTES=536870912
HLS_SEGMENT_DURATION_SECONDS=6

templates/veza-stream.service.j2
@@ -0,0 +1,35 @@
# Managed by Ansible — do not edit by hand.
# veza_app role, templates/veza-stream.service.j2.
# Release SHA: {{ veza_release_sha }}; color: {{ veza_target_color }}
[Unit]
Description=Veza stream server (Rust/Axum) — color {{ veza_target_color }}, sha {{ veza_release_sha[:12] }}
Documentation=https://veza.fr/docs
After=network-online.target
Wants=network-online.target
AssertPathExists={{ veza_app_current_link }}/{{ veza_app_binary_name }}

[Service]
Type=simple
User={{ veza_app_user }}
Group={{ veza_app_group }}
EnvironmentFile=-{{ veza_app_env_file }}
WorkingDirectory={{ veza_app_current_link }}
ExecStart={{ veza_app_current_link }}/{{ veza_app_binary_name }}
Restart=on-failure
RestartSec=5s
# Stream server holds many WebSocket + HLS connections in flight;
# the default LimitNOFILE=1024 chokes around 200 concurrent listeners.
LimitNOFILE=131072
# Hardening — same baseline as the backend.
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths={{ veza_app_install_dir }} {{ veza_log_root }} {{ veza_state_root }}
PrivateTmp=true
ProtectKernelTunables=true
ProtectKernelModules=true
ProtectControlGroups=true

[Install]
WantedBy=multi-user.target