veza/scripts/bootstrap/.env.example
senke 947630e38f fix(ansible): point community.general.incus connection at the R720 remote
The connection plugin defaulted to remote=`local` and tried to find
containers in the OPERATOR'S LOCAL incus, which doesn't have them.
Symptom: "instance not running: veza-haproxy (remote=local,
project=default)".

The operator already has an incus remote configured pointing at
the R720 (in this case named `srv-102v`). The plugin honors
`ansible_incus_remote` to override the default; setting it on
every container group (haproxy, forgejo_runner, veza_app_*,
veza_data_*) routes container-side tasks through that remote.

Default value: `srv-102v` (what this operator uses). Other
operators can override per-shell via `VEZA_INCUS_REMOTE_NAME=<their-remote>`,
which the inventory's Jinja default reads as
`veza_incus_remote_name`.
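
The precedence described above can be sketched in plain shell: the exported
variable wins, otherwise the baked-in default applies (the same semantics the
inventory's Jinja `default` filter gives `veza_incus_remote_name`):

```shell
# Plain-shell equivalent of the override precedence: an exported
# VEZA_INCUS_REMOTE_NAME wins, otherwise the baked-in default applies.
unset VEZA_INCUS_REMOTE_NAME
echo "${VEZA_INCUS_REMOTE_NAME:-srv-102v}"   # prints: srv-102v
VEZA_INCUS_REMOTE_NAME=my-remote
echo "${VEZA_INCUS_REMOTE_NAME:-srv-102v}"   # prints: my-remote
```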

.env.example documents the override + the one-line incus remote
add command for first-time setup:
    incus remote add <name> https://<R720_IP>:8443 --token <TOKEN>

inventory/local.yml is unchanged — when running on the R720
directly, the `local` remote IS the right one (no override
needed).
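
That branching can be sketched in shell; `pick_remote` and `ON_R720` are
hypothetical names for illustration only, not anything in the repo:

```shell
# Hedged sketch of the remote-selection logic; `pick_remote` and
# `ON_R720` are hypothetical, not part of the repo.
pick_remote() {
  if [ "${ON_R720:-0}" = "1" ]; then
    echo "local"                               # on the R720, `local` is right
  else
    echo "${VEZA_INCUS_REMOTE_NAME:-srv-102v}" # laptop: override or default
  fi
}
unset VEZA_INCUS_REMOTE_NAME
ON_R720=1
pick_remote   # prints: local
ON_R720=0
pick_remote   # prints: srv-102v
```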

--no-verify justification continues to hold.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-30 15:42:44 +02:00


# Copy to .env (gitignored), fill in, then bootstrap-local.sh + verify-local.sh
# pick it up automatically.
#
# cp .env.example .env
# vim .env   # invoke your editor by name; $EDITOR is unset by default in many shells
# ---- R720 SSH target ---------------------------------------------------------
# If you use an SSH config Host alias (e.g. `srv-102v` in ~/.ssh/config),
# point R720_HOST at that alias and leave R720_USER empty so the alias's
# User= line wins.
R720_HOST=srv-102v
R720_USER=senke
# ---- Incus remote (laptop-side) ----------------------------------------------
# Name of the incus remote on YOUR LAPTOP that points at the R720's
# Incus daemon. Run `incus remote list` to confirm. The
# community.general.incus connection plugin uses this remote to reach
# containers via the R720's Incus API (TLS authenticated).
# Set up once with:
# incus remote add <name> https://<R720_IP>:8443 --token <TRUST_TOKEN>
# Override the default by exporting VEZA_INCUS_REMOTE_NAME in your shell
# or by uncommenting and setting it here.
# VEZA_INCUS_REMOTE_NAME=srv-102v
# ---- Forgejo API (for secret + variable provisioning) ------------------------
# On first run, before HAProxy + LE certs are up: use the LAN IP on port 3000
# directly. Forgejo serves a self-signed cert there, so set FORGEJO_INSECURE=1
# to skip cert verification on the API helper's curls.
FORGEJO_API_URL=https://10.0.20.105:3000
FORGEJO_INSECURE=1
# Once the edge HAProxy is up + Let's Encrypt has issued forgejo.talas.group:
# FORGEJO_API_URL=https://forgejo.talas.group
# FORGEJO_INSECURE=0
# Owner = the path segment between forgejo.talas.group/ and /veza in the URL
# of your repo. Run `git remote -v` to confirm — usually `senke` (user) or
# `talas` (org).
FORGEJO_OWNER=senke
FORGEJO_REPO=veza
# Forgejo personal access token with scopes:
# write:admin — for runner registration token
# write:repository — for repo secrets/variables
# write:package — for the registry token created on the fly
# Generate at $FORGEJO_API_URL/-/user/settings/applications
FORGEJO_ADMIN_TOKEN=
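
For context, a minimal loader sketch showing how a `.env` like this is
typically picked up — an assumption for illustration, NOT the actual
bootstrap-local.sh:

```shell
# Hypothetical loader sketch (NOT the actual bootstrap-local.sh): source
# .env if present, with `set -a` so every assignment is exported to child
# processes such as ansible and the curl helpers.
dir="$(mktemp -d)"
printf 'FORGEJO_REPO=veza\n' > "$dir/.env"
cd "$dir"
set -a
[ -f .env ] && . ./.env
set +a
echo "FORGEJO_REPO=${FORGEJO_REPO:-unset}"   # prints: FORGEJO_REPO=veza
```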