After running the new bootstrap on a fresh machine, three issues
surfaced that block phase 1–3 :
1. .forgejo/workflows/ may live under workflows.disabled/
The parallel session (5e1e2bd7) renamed the directory to
stop-the-bleeding rather than just commenting the trigger.
verify-local.sh now reports both states correctly.
enable-auto-deploy.sh does `git mv workflows.disabled
workflows` first, then proceeds to uncomment if needed.
2. Forgejo on 10.0.20.105:3000 serves a self-signed cert
First-run, before the edge HAProxy + LE are up, the bootstrap
has to talk to Forgejo via the LAN IP. lib.sh's forgejo_api
helper now honours FORGEJO_INSECURE=1 (passes -k to curl).
verify-local.sh's API checks pick up the same flag.
.env.example documents the swap : FORGEJO_INSECURE=1 with
https://10.0.20.105:3000 first ; flip to https://forgejo.talas.group
+ FORGEJO_INSECURE=0 once the edge HAProxy + LE cert are up.
3. SSH defaults wrong for the actual environment
.env.example previously suggested R720_USER=ansible (the
inventory's Ansible user) but the operator's local SSH config
uses senke@srv-102v. Updated defaults : R720_HOST=srv-102v,
R720_USER=senke. Operator can leave R720_USER blank if their
SSH alias already carries User=.
Plus two new helper scripts :
reset-vault.sh — recovery path when the vault password in
.vault-pass doesn't match what encrypted vault.yml. Confirms
destructively, removes vault.yml + .vault-pass, clears the
vault=DONE marker in local.state, points operator at PHASE=2.
verify-remote-ssh.sh — wrapper that scp's lib.sh +
verify-remote.sh to the R720 and runs verify-remote.sh under
sudo. Removes the need to clone the repo on the R720.
bootstrap-local.sh's phase 2 vault-decrypt failure now hints at
reset-vault.sh.
README.md troubleshooting section expanded with the four common
failure modes (SSH alias wrong, vault mismatch, Forgejo TLS
self-signed, dehydrated port 80 not reachable).
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
apps/web — Frontend React 18 + Vite 5 + TypeScript strict (source of truth for the UI)
veza-backend-api — Main Go 1.25 API service (Gin, GORM, Postgres, Redis, RabbitMQ, Elasticsearch). Handles REST, WebSocket, and chat (chat server was merged into this service in v0.502).
veza-stream-server — Rust streaming server (Axum 0.8, Tokio 1.35, Symphonia) — HLS, HTTP Range, WebSocket, gRPC
Prerequisites: Node 20 (see .nvmrc), Go, Rust, Docker. Configure .env from .env.example.
# Verify environment
make doctor
./scripts/validate-env.sh development
# Install dependencies
make install-deps
# Option A — Backend in Docker + Web local
make dev
# Option B — All apps local with hot reload (infra from docker-compose.dev.yml)
make dev-full
# Option C — Infra only, then run services manually
docker compose -f docker-compose.dev.yml up -d
make dev-web # or make dev-backend-api, make dev-stream-server
Canonical production compose file: docker-compose.prod.yml
docker compose -f docker-compose.prod.yml up -d
See make/config.mk for COMPOSE_PROD and deployment docs.
CI/CD
Badge : CI status above. Set SLACK_WEBHOOK_URL (Incoming Webhook) in repo secrets to receive Slack notifications on failure.
Disabled workflows
Storybook (chromatic.yml.disabled, storybook-audit.yml.disabled, visual-regression.yml.disabled): deferred until MSW is wired up for /api/v1/auth/me and /api/v1/logs/frontend, which currently causes ~1 400 network errors in the Storybook build. The npm scripts (storybook, build-storybook) still work locally for one-off component inspection. To reactivate in CI, fix the MSW handlers and rename the three files back to .yml.