Compare commits


6 commits

Author SHA1 Message Date
senke
c323d37c30 fix(web): flip HLS_STREAMING feature flag default to true
Backend default was flipped to HLS_STREAMING=true on Day 17 of the
v1.0.9 sprint (config.go:418), and docker-compose.{prod,staging}.yml
already pass HLS_STREAMING=true to the backend service. The frontend
feature flag in apps/web/src/config/features.ts kept the old `false`
default with a stale comment about matching the backend — so HLS
playback was silently skipped on every deploy that didn't override
VITE_FEATURE_HLS_STREAMING=true.

Net effect: useAudioPlayerLifecycle treated `FEATURES.HLS_STREAMING`
as false → fell through to the MP3 range fallback even when the
transcoder had segments ready. Adaptive bitrate existed on paper but
was off in practice.
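
For reference, the backend-side default mentioned above (config.go:418) presumably boils down to a boolean env read with a `true` default. A minimal sketch of that pattern; `getEnvBool` is an invented name, not the actual config.go code:

    package main

    import (
        "fmt"
        "os"
        "strconv"
    )

    // getEnvBool is an invented helper showing the pattern: read the env
    // var, fall back to the default when unset or unparsable.
    func getEnvBool(key string, def bool) bool {
        raw, ok := os.LookupEnv(key)
        if !ok || raw == "" {
            return def
        }
        v, err := strconv.ParseBool(raw)
        if err != nil {
            return def
        }
        return v
    }

    func main() {
        // Day 17 flip: HLS stays on unless an operator explicitly disables it.
        fmt.Println("HLS_STREAMING =", getEnvBool("HLS_STREAMING", true))
    }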

Flipped the default to true with a refreshed comment. Operators can
still set VITE_FEATURE_HLS_STREAMING=false for unit tests or
playback-regression bisection.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:45:01 +02:00
senke
bf24a5e3ce feat(infra): add coturn service + wire WEBRTC_TURN_* envs in compose
WebRTC 1:1 calls were silently broken behind symmetric NAT (corporate
firewalls, mobile CGNAT, Incus default networking) because no TURN
relay was deployed. The /api/v1/config/webrtc endpoint and the
useWebRTC frontend hook were both wired correctly from v1.0.9 Day 1,
but with no TURN box on the network the handler returned STUN-only
and the SPA's `nat.hasTurn` flag stayed false.
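
The handler behavior described above comes down to an all-or-nothing assembly of the ICE server list. A minimal sketch, not the actual webrtc_config_handler.go; the struct and function names are invented for illustration:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ICEServer mirrors the shape the SPA expects; field names are
    // illustrative, not the real handler's types.
    type ICEServer struct {
        URLs       []string `json:"urls"`
        Username   string   `json:"username,omitempty"`
        Credential string   `json:"credential,omitempty"`
    }

    func buildICEServers() []ICEServer {
        var servers []ICEServer
        if stun := os.Getenv("WEBRTC_STUN_URLS"); stun != "" {
            servers = append(servers, ICEServer{URLs: strings.Split(stun, ",")})
        }
        turnURLs := os.Getenv("WEBRTC_TURN_URLS")
        user := os.Getenv("WEBRTC_TURN_USERNAME")
        cred := os.Getenv("WEBRTC_TURN_CREDENTIAL")
        // All-or-nothing: a partially configured TURN entry is dropped, which
        // is the silent STUN-only degradation described above.
        if turnURLs != "" && user != "" && cred != "" {
            servers = append(servers, ICEServer{
                URLs:       strings.Split(turnURLs, ","),
                Username:   user,
                Credential: cred,
            })
        }
        return servers
    }

    func main() {
        fmt.Printf("%d ICE server entries\n", len(buildICEServers()))
    }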

Added:
  * docker-compose.prod.yml: new `coturn` service using the official
    coturn/coturn:4.6.2 image, network_mode: host (UDP relay range
    49152-65535 doesn't survive Docker NAT), config passed entirely
    via CLI args so no template render is needed. TLS cert volume
    points at /etc/letsencrypt/live/turn.veza.fr by default; override
    with TURN_CERT_DIR for non-LE setups. Healthcheck uses nc -uz to
    catch crashed/unbound listeners.
  * Both backend services (blue + green): WEBRTC_STUN_URLS,
    WEBRTC_TURN_URLS, WEBRTC_TURN_USERNAME, WEBRTC_TURN_CREDENTIAL
    pulled from env with `:?` strict-fail markers so a misconfigured
    deploy crashes loudly instead of degrading silently to STUN-only.
  * docker-compose.staging.yml: same 4 env vars but with safe fallback
    defaults (Google STUN, no TURN) so staging boots without a coturn
    box. Operators can flip to relay by setting the envs externally.

The operator must set the following secrets at deploy time:
  WEBRTC_TURN_PUBLIC_IP   the host's public IP (used both by coturn
                          --external-ip and by the backend STUN/TURN
                          URLs the SPA receives)
  WEBRTC_TURN_USERNAME    static long-term credential username
  WEBRTC_TURN_CREDENTIAL  static long-term credential password
  WEBRTC_TURN_REALM       optional, defaults to turn.veza.fr

Smoke test: turnutils_uclient -u $USER -w $CRED -p 3478 $PUBLIC_IP
should return a relay allocation within ~1s. From the SPA, watch
chrome://webrtc-internals during a call and confirm the selected
candidate pair is `relay` when both peers are on symmetric NAT.

The Ansible role under infra/coturn/ is the canonical Incus-native
deploy path documented in infra/coturn/README.md; this compose
service is the simpler single-host option that unblocks calls today.
v1.1 will switch from static to ephemeral REST-shared-secret
credentials per ORIGIN_SECURITY_FRAMEWORK.md.
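
The v1.1 ephemeral scheme, in the standard TURN REST form (username = `<expiry>:<user>`, password = base64 HMAC-SHA1 of that username with a shared secret), looks roughly like the sketch below. It assumes coturn would then run with `--use-auth-secret`; nothing here is taken from ORIGIN_SECURITY_FRAMEWORK.md and the function name is invented:

    package main

    import (
        "crypto/hmac"
        "crypto/sha1"
        "encoding/base64"
        "fmt"
        "time"
    )

    // ephemeralTURNCredential derives a short-lived username/password pair:
    // username = "<expiry-unix>:<user>", password = base64(HMAC-SHA1(secret,
    // username)). coturn verifies these when started with --use-auth-secret
    // --static-auth-secret=<secret> instead of --lt-cred-mech --user.
    func ephemeralTURNCredential(user, secret string, ttl time.Duration) (string, string) {
        username := fmt.Sprintf("%d:%s", time.Now().Add(ttl).Unix(), user)
        mac := hmac.New(sha1.New, []byte(secret))
        mac.Write([]byte(username))
        return username, base64.StdEncoding.EncodeToString(mac.Sum(nil))
    }

    func main() {
        u, p := ephemeralTURNCredential("alice", "CHANGE_ME", time.Hour)
        fmt.Println(u, p)
    }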

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:44:12 +02:00
senke
947630e38f fix(ansible): point community.general.incus connection at the R720 remote
The connection plugin defaulted to remote=`local` and tried to find
containers in the OPERATOR'S LOCAL incus, which doesn't have them.
Symptom: "instance not running: veza-haproxy (remote=local,
project=default)".

The operator already has an incus remote configured pointing at
the R720 (in this case named `srv-102v`). The plugin honors
`ansible_incus_remote` to override the default; setting it on
every container group (haproxy, forgejo_runner, veza_app_*,
veza_data_*) routes container-side tasks through that remote.

Default value: `srv-102v` (what this operator uses). Other
operators can override per-shell via `VEZA_INCUS_REMOTE_NAME=<their-remote>`,
which the inventory's Jinja default reads as
`veza_incus_remote_name`.

.env.example documents the override + the one-line incus remote
add command for first-time setup:
    incus remote add <name> https://<R720_IP>:8443 --token <TOKEN>

inventory/local.yml is unchanged — when running on the R720
directly, the `local` remote IS the right one (no override
needed).

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:42:44 +02:00
senke
6a54268476 fix(infra): wire AWS_S3_ENABLED + TRACK_STORAGE_BACKEND in prod/staging compose
The prod and staging compose files were passing AWS_S3_ENDPOINT,
AWS_S3_BUCKET, AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY but NOT
the two flags that actually activate the routing:
  - AWS_S3_ENABLED      (default false in code → S3 stack skipped)
  - TRACK_STORAGE_BACKEND  (default "local" in code → uploads to disk)

So both prod and staging deploys were silently writing track uploads
to local disk despite the apparent S3 wiring. With blue/green
active/active behind HAProxy, that's an HA bug — uploads on the blue
pod aren't visible to green and vice versa.

Set both flags in:
  - docker-compose.staging.yml backend service (1 instance)
  - docker-compose.prod.yml backend_blue + backend_green (2 instances,
    same env block via replace_all)

The code already validates on startup that TRACK_STORAGE_BACKEND=s3
requires AWS_S3_ENABLED=true (config.go:1040-1042) so a partial
config now fails loudly instead of falling back to local.
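
A minimal sketch of that kind of fail-loud startup check; the function name and error text are assumptions, not the real config.go:1040-1042:

    package main

    import (
        "fmt"
        "os"
    )

    // validateStorageConfig sketches the fail-loud check: s3 routing without
    // the S3 stack enabled is a configuration error, not a silent fallback.
    func validateStorageConfig(backend string, s3Enabled bool) error {
        if backend == "s3" && !s3Enabled {
            return fmt.Errorf("TRACK_STORAGE_BACKEND=s3 requires AWS_S3_ENABLED=true")
        }
        return nil
    }

    func main() {
        backend := os.Getenv("TRACK_STORAGE_BACKEND")
        s3Enabled := os.Getenv("AWS_S3_ENABLED") == "true"
        if err := validateStorageConfig(backend, s3Enabled); err != nil {
            fmt.Fprintln(os.Stderr, "config:", err)
            os.Exit(1) // crash at startup rather than silently writing to disk
        }
    }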

The S3StorageService is already implemented (services/s3_storage_service.go)
and wired into TrackService.UploadTrack via the storageBackend dispatcher
(core/track/service.go:432). HLS segment output remains on the
hls_*_data volume — that's a separate concern (stream server local
write), out of scope for this compose-only fix.
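
A minimal sketch of what a storageBackend dispatcher like the one at core/track/service.go:432 could look like; the interface, names and signatures are invented for illustration:

    package main

    import (
        "context"
        "fmt"
        "io"
    )

    // ObjectStore is an invented stand-in for the storage abstraction that
    // S3StorageService and a local-disk writer would both satisfy.
    type ObjectStore interface {
        Put(ctx context.Context, key string, r io.Reader) (string, error)
    }

    // pickStore routes uploads by TRACK_STORAGE_BACKEND; unknown values are
    // rejected instead of quietly falling back to local.
    func pickStore(backend string, s3, local ObjectStore) (ObjectStore, error) {
        switch backend {
        case "s3":
            return s3, nil
        case "local", "":
            return local, nil
        default:
            return nil, fmt.Errorf("unknown TRACK_STORAGE_BACKEND %q", backend)
        }
    }

    func main() {
        store, err := pickStore("s3", nil, nil)
        fmt.Println(store, err)
    }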

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:39:30 +02:00
senke
5f6625cc56 fix(ansible): detect storage pool from forgejo's root device, not first listed
The previous detection picked the first row of `incus storage list -f csv`,
which on the user's R720 returned `default` — but `default` is not
usable on this server (`Storage pool is unavailable on this server`
when launching). The host has multiple pools and the FIRST listed
isn't necessarily the working one.

New detection strategy (most reliable first):
  1. `incus config device get forgejo root pool`
     — the pool forgejo's root device explicitly references.
  2. `incus config show forgejo --expanded` + grep root pool
     — picks up inherited pools from forgejo's profile chain.
  3. Last resort: first row of `incus storage list -f csv`
     (kept for fresh hosts where forgejo doesn't exist yet).

Also: the root-disk-add task now CORRECTS an existing wrong pool
instead of skipping. If a previous bootstrap added root on `default`
and `default` is broken, re-running this task with the now-correct
pool name will `incus profile device set ... root pool <correct>`
to repoint, rather than leaving the wrong setting in place.

Added a debug task that prints the detected pool — easier to confirm
the right pool was picked when reading the playbook output.

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:34:50 +02:00
senke
4298f0c26a fix(ansible): bootstrap_runner — add root disk to veza-{app,data} profiles
`incus launch ... --profile veza-app` failed with:
  Failed initializing instance: Invalid devices:
    Failed detecting root disk device: No root device could be found

Cause: the profiles were created empty. Incus needs a root disk
device referencing a storage pool to actually launch a container;
the `default` profile carries one implicitly but custom profiles
need it added explicitly OR the launch must combine `default` +
custom profile.

Fix: phase 1 of bootstrap_runner.yml now:
  1. Detects the first available storage pool (`incus storage list`).
  2. After creating each profile, adds a root disk device pointing
     at that pool: `incus profile device add veza-app root disk
     path=/ pool=<detected>`.

Idempotent: the add-root step is guarded by `incus profile device
show veza-app | grep -q '^root:'`; re-runs are no-ops.

Storage pool autodetect picks the first row of `incus storage list`
— typically `default`, but accepts custom names (`local`, `data`,
etc.) without operator intervention.

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:32:00 +02:00
7 changed files with 187 additions and 6 deletions

View file

@@ -49,14 +49,19 @@ export const FEATURES = {
* HLS Streaming
* Backend endpoints: /api/v1/tracks/:id/hls/info, /api/v1/tracks/:id/hls/status
*
* Default is `false` to match backend `HLS_STREAMING` env (off by default).
* When off, playback goes through `/api/v1/tracks/:id/stream` (MP3 range requests).
* Enable via VITE_FEATURE_HLS_STREAMING=true in environments where the backend
* transcoder is actually running.
* Default flipped to `true` in v1.0.10 polish to match backend
* `HLS_STREAMING=true` (Day 17 of the v1.0.9 sprint). Adaptive
* bitrate via HLS is the canonical playback path; MP3 range
* requests via `/api/v1/tracks/:id/stream` remain a fallback when
* the browser can't play HLS or the transcoder hasn't produced
* segments yet.
*
* Set VITE_FEATURE_HLS_STREAMING=false to opt out (unit-test envs
* without a transcoder, or to bisect playback regressions).
*/
HLS_STREAMING: parseFeatureEnv(
import.meta.env.VITE_FEATURE_HLS_STREAMING,
false,
true,
),
/**

View file

@@ -181,6 +181,21 @@ services:
- AWS_ACCESS_KEY_ID=${S3_ACCESS_KEY:?S3_ACCESS_KEY must be set}
- AWS_SECRET_ACCESS_KEY=${S3_SECRET_KEY:?S3_SECRET_KEY must be set}
- AWS_REGION=${AWS_REGION:-us-east-1}
# v1.0.10 polish: enable the S3 stack and route track uploads through
# MinIO end-to-end. Without these two flags, defaults (local +
# disabled) win and the AWS_S3_* credentials above are inert. With
# blue/green active/active behind HAProxy, local-disk uploads on
# one pod are invisible to the other — S3 is required for HA.
- AWS_S3_ENABLED=true
- TRACK_STORAGE_BACKEND=s3
# WebRTC ICE servers — populated from the coturn service above.
# Empty TURN vars degrade to STUN-only (calls work peer-to-peer
# but fail behind symmetric NAT); the all-or-nothing rule in
# webrtc_config_handler.go means partial config is rejected.
- WEBRTC_STUN_URLS=stun:${WEBRTC_TURN_PUBLIC_IP:?WEBRTC_TURN_PUBLIC_IP must be set}:3478
- WEBRTC_TURN_URLS=turn:${WEBRTC_TURN_PUBLIC_IP}:3478,turns:${WEBRTC_TURN_PUBLIC_IP}:5349
- WEBRTC_TURN_USERNAME=${WEBRTC_TURN_USERNAME:?WEBRTC_TURN_USERNAME must be set}
- WEBRTC_TURN_CREDENTIAL=${WEBRTC_TURN_CREDENTIAL:?WEBRTC_TURN_CREDENTIAL must be set}
- HLS_STREAMING=true
- HLS_STORAGE_DIR=/data/hls
volumes:
@@ -236,6 +251,21 @@ services:
- AWS_ACCESS_KEY_ID=${S3_ACCESS_KEY:?S3_ACCESS_KEY must be set}
- AWS_SECRET_ACCESS_KEY=${S3_SECRET_KEY:?S3_SECRET_KEY must be set}
- AWS_REGION=${AWS_REGION:-us-east-1}
# v1.0.10 polish: enable the S3 stack and route track uploads through
# MinIO end-to-end. Without these two flags, defaults (local +
# disabled) win and the AWS_S3_* credentials above are inert. With
# blue/green active/active behind HAProxy, local-disk uploads on
# one pod are invisible to the other — S3 is required for HA.
- AWS_S3_ENABLED=true
- TRACK_STORAGE_BACKEND=s3
# WebRTC ICE servers — populated from the coturn service above.
# Empty TURN vars degrade to STUN-only (calls work peer-to-peer
# but fail behind symmetric NAT); the all-or-nothing rule in
# webrtc_config_handler.go means partial config is rejected.
- WEBRTC_STUN_URLS=stun:${WEBRTC_TURN_PUBLIC_IP:?WEBRTC_TURN_PUBLIC_IP must be set}:3478
- WEBRTC_TURN_URLS=turn:${WEBRTC_TURN_PUBLIC_IP}:3478,turns:${WEBRTC_TURN_PUBLIC_IP}:5349
- WEBRTC_TURN_USERNAME=${WEBRTC_TURN_USERNAME:?WEBRTC_TURN_USERNAME must be set}
- WEBRTC_TURN_CREDENTIAL=${WEBRTC_TURN_CREDENTIAL:?WEBRTC_TURN_CREDENTIAL must be set}
- HLS_STREAMING=true
- HLS_STORAGE_DIR=/data/hls
volumes:
@@ -350,6 +380,59 @@ services:
networks:
- veza-network
# ============================================================================
# COTURN — TURN/STUN relay for WebRTC NAT traversal (v1.0.10 polish)
# ----------------------------------------------------------------------------
# Calls (1:1 audio/video) signal through chat WebSocket but the actual
# media stream needs a relay when both peers are behind symmetric NAT.
# Without this service, every call between users on corporate firewalls,
# mobile CGNAT or Incus default networking will silently fail with
# iceConnectionState=failed after ~30s.
#
# network_mode: host is REQUIRED — coturn allocates UDP ports in the
# 49152-65535 range for media relay, and Docker's NAT layer drops them.
# Host networking exposes the host's public IP directly, which is what
# WEBRTC_TURN_PUBLIC_IP must point at (so coturn advertises the right
# candidate to remote peers).
#
# The infra/coturn/README.md describes a parallel Incus-native deploy
# path; this compose service is the simpler dev/single-host option.
# If you run prod on multiple hosts behind a load balancer, prefer the
# Ansible/Incus path so coturn lives on a host with a stable public IP.
# ============================================================================
coturn:
image: coturn/coturn:4.6.2
container_name: veza_coturn
restart: unless-stopped
network_mode: host
command:
- "-n"
- "--listening-port=3478"
- "--tls-listening-port=5349"
- "--external-ip=${WEBRTC_TURN_PUBLIC_IP:?WEBRTC_TURN_PUBLIC_IP must be set (the public IP coturn advertises to peers)}"
- "--realm=${WEBRTC_TURN_REALM:-turn.veza.fr}"
- "--lt-cred-mech"
- "--user=${WEBRTC_TURN_USERNAME:?WEBRTC_TURN_USERNAME must be set}:${WEBRTC_TURN_CREDENTIAL:?WEBRTC_TURN_CREDENTIAL must be set}"
- "--min-port=49152"
- "--max-port=65535"
- "--no-cli"
- "--no-tlsv1"
- "--no-tlsv1_1"
- "--cert=/etc/coturn/cert.pem"
- "--pkey=/etc/coturn/key.pem"
volumes:
# Map the TLS cert dir read-only. Default points at a Let's Encrypt
# rotation managed outside this compose (certbot on the host or
# similar). Override TURN_CERT_DIR for self-signed dev certs.
- ${TURN_CERT_DIR:-/etc/letsencrypt/live/turn.veza.fr}:/etc/coturn:ro
healthcheck:
# nc -uz checks UDP/3478 is bound; doesn't validate auth but catches
# crashes / cert-load failures cleanly.
test: ["CMD-SHELL", "nc -zu localhost 3478 || exit 1"]
interval: 30s
timeout: 5s
retries: 3
# ============================================================================
# MONITORING - Alertmanager
# Set SLACK_WEBHOOK_URL for Slack notifications. Works with Prometheus.

View file

@@ -77,6 +77,17 @@ services:
- AWS_ACCESS_KEY_ID=${STAGING_S3_ACCESS_KEY:?STAGING_S3_ACCESS_KEY must be set}
- AWS_SECRET_ACCESS_KEY=${STAGING_S3_SECRET_KEY:?STAGING_S3_SECRET_KEY must be set}
- AWS_REGION=us-east-1
# v1.0.10 polish: enable the S3 stack and route track uploads through
# MinIO end-to-end. Without these two flags, defaults (local +
# disabled) win and the AWS_S3_* credentials above are inert.
- AWS_S3_ENABLED=true
- TRACK_STORAGE_BACKEND=s3
# WebRTC ICE — STUN-only by default in staging (no public TURN
# box). Set the WEBRTC_TURN_* envs externally to flip to relay.
- WEBRTC_STUN_URLS=${WEBRTC_STUN_URLS:-stun:stun.l.google.com:19302}
- WEBRTC_TURN_URLS=${WEBRTC_TURN_URLS:-}
- WEBRTC_TURN_USERNAME=${WEBRTC_TURN_USERNAME:-}
- WEBRTC_TURN_CREDENTIAL=${WEBRTC_TURN_CREDENTIAL:-}
- HLS_STREAMING=true
- HLS_STORAGE_DIR=/data/hls
volumes:

View file

@@ -29,6 +29,7 @@ all:
forgejo-runner:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
# SHARED edge — one HAProxy on the R720 public 443. Serves
# staging + prod + forgejo.talas.group simultaneously. Same
@@ -38,6 +39,7 @@ all:
veza-haproxy:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_app_backend:
children:
@@ -46,6 +48,7 @@ all:
veza_app_backend_tools:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_app_backend_blue:
hosts:
@@ -62,6 +65,7 @@ all:
veza_app_stream_green:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_app_stream_blue:
hosts:
@@ -75,6 +79,7 @@ all:
veza_app_web_green:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_app_web_blue:
hosts:
@@ -90,6 +95,7 @@ all:
veza_data_minio:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_data_postgres:
hosts:

View file

@@ -47,6 +47,7 @@ all:
forgejo-runner:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
# SHARED edge — one HAProxy on the R720 public 443. Serves
# staging + prod + forgejo.talas.group simultaneously, Host-based
@@ -58,6 +59,7 @@ all:
veza-haproxy:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
# The 6 app containers + 1 ephemeral tools container. deploy_app.yml
# selects the inactive color dynamically from the haproxy
@@ -70,6 +72,7 @@ all:
veza_app_backend_tools:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_app_backend_blue:
hosts:
@@ -86,6 +89,7 @@ all:
veza_app_stream_green:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_app_stream_blue:
hosts:
@@ -99,6 +103,7 @@ all:
veza_app_web_green:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_app_web_blue:
hosts:
@@ -116,6 +121,7 @@ all:
veza_data_minio:
vars:
ansible_connection: community.general.incus
ansible_incus_remote: "{{ veza_incus_remote_name | default('srv-102v') }}"
ansible_python_interpreter: /usr/bin/python3
veza_data_postgres:
hosts:

View file

@@ -54,7 +54,41 @@
become: true
gather_facts: true
tasks:
- name: Ensure veza-{app,data} profiles exist (empty by default)
- name: Detect Incus storage pool actually used by forgejo
# Containers need a root disk device that references a storage pool.
# The host may have multiple pools, some of which are stale or
# unavailable. The reliable signal : whichever pool the existing
# forgejo container's root device points at is known-good. Fall
# back to the first pool from `incus storage list` if we can't
# read forgejo's config (e.g. fresh host without forgejo yet).
ansible.builtin.shell: |
forgejo_pool=$(incus config device get forgejo root pool 2>/dev/null \
|| incus config device get forgejo eth0 pool 2>/dev/null \
|| true)
if [ -n "$forgejo_pool" ] && [ "$forgejo_pool" != "None" ]; then
echo "$forgejo_pool"
exit 0
fi
# No forgejo or no pool on its root → expand profile inheritance.
# `incus config show forgejo --expanded` includes inherited devices.
forgejo_pool=$(incus config show forgejo --expanded 2>/dev/null \
| awk '/^ root:/{flag=1} flag && /^ pool:/{print $2; exit}' \
|| true)
if [ -n "$forgejo_pool" ]; then
echo "$forgejo_pool"
exit 0
fi
# Last resort : first pool from `incus storage list`.
incus storage list -f csv 2>/dev/null | awk -F, 'NR==1{print $1; exit}'
register: storage_pool
changed_when: false
failed_when: storage_pool.stdout | trim == ""
- name: Show detected storage pool
ansible.builtin.debug:
msg: "Storage pool : {{ storage_pool.stdout | trim }}"
- name: Ensure veza-{app,data} profiles exist
ansible.builtin.command: incus profile create {{ item }}
register: profile_create
failed_when: profile_create.rc != 0 and 'already exists' not in profile_create.stderr
@@ -63,6 +97,31 @@
- veza-app
- veza-data
- name: Ensure each profile's root disk points at pool={{ storage_pool.stdout | trim }}
# If a root device already exists but on the WRONG pool (e.g. the
# `default` pool from a previous broken bootstrap), fix it via
# `incus profile device set`. Else add fresh.
ansible.builtin.shell: |
POOL="{{ storage_pool.stdout | trim }}"
existing=$(incus profile device get {{ item }} root pool 2>/dev/null || true)
if [ "$existing" = "$POOL" ]; then
echo "root device on $POOL already"
exit 0
fi
if [ -n "$existing" ]; then
# Device exists with wrong pool — correct it.
incus profile device set {{ item }} root pool "$POOL"
echo "root device repointed to $POOL"
else
incus profile device add {{ item }} root disk path=/ pool="$POOL"
echo "root device added on $POOL"
fi
register: profile_root
changed_when: "'already' not in profile_root.stdout"
loop:
- veza-app
- veza-data
- name: Detect legacy empty veza-net profile
ansible.builtin.command: incus profile show veza-net
register: vnet_show

View file

@@ -12,6 +12,17 @@
R720_HOST=srv-102v
R720_USER=senke
# ---- Incus remote (laptop-side) ----------------------------------------------
# Name of the incus remote on YOUR LAPTOP that points at the R720's
# Incus daemon. Run `incus remote list` to confirm. The
# community.general.incus connection plugin uses this remote to reach
# containers via the R720's Incus API (TLS authenticated).
# Set up once with :
# incus remote add <name> https://<R720_IP>:8443 --token <TRUST_TOKEN>
# Override default by exporting VEZA_INCUS_REMOTE_NAME in your shell
# or appending here.
# VEZA_INCUS_REMOTE_NAME=srv-102v
# ---- Forgejo API (for secret + variable provisioning) ------------------------
# First-run, before HAProxy + LE certs are up : use the LAN IP on port 3000
# directly. Forgejo serves a self-signed cert there, so set FORGEJO_INSECURE=1