Replace the long manual checklist (RUNBOOK_DEPLOY_BOOTSTRAP) with
six scripts. Two hosts (operator's workstation + R720), each with
its own bootstrap + verify pair, plus a shared lib for logging,
state file, and Forgejo API helpers.
Files:
scripts/bootstrap/
├── lib.sh                — sourced by all (logging, error trap,
│                           phase markers, idempotent state file,
│                           Forgejo API helpers: forgejo_api,
│                           forgejo_set_secret, forgejo_set_var,
│                           forgejo_get_runner_token; sketch below)
├── bootstrap-local.sh    — drives 6 phases on the operator's
│                           workstation
├── bootstrap-remote.sh   — runs on the R720 (over SSH); 4 phases
├── verify-local.sh       — read-only check of local state
├── verify-remote.sh      — read-only check of R720 state
├── enable-auto-deploy.sh — flips the deploy.yml gate after a
│                           successful manual run
├── .env.example          — template for site config
└── README.md             — usage + troubleshooting
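For concreteness, a minimal sketch of the lib.sh primitives. Function names come from the tree above; the bodies are illustrative, the secrets endpoint assumes a current Forgejo API (PUT /repos/{owner}/{repo}/actions/secrets/{name}), and FORGEJO_URL, FORGEJO_TOKEN, FORGEJO_REPO would come from scripts/bootstrap/.env:

    # --- sketch, not the actual lib.sh ---
    STATE_FILE=${STATE_FILE:-.git/talas-bootstrap/local.state}

    log()         { printf '[%s] %s\n' "$(date -u +%H:%M:%S)" "$*" >&2; }
    phase_start() { printf '>>>PHASE:%s:START<<<\n' "$1"; }
    phase_done()  { printf '>>>PHASE:%s:DONE<<<\n' "$1"
                    printf '%s=DONE %s\n' "$1" "$(date -u +%s)" >> "$STATE_FILE"; }
    phase_skip()  { grep -q "^$1=DONE" "$STATE_FILE" 2>/dev/null; }

    forgejo_api() {                # usage: forgejo_api GET /repos/<owner>/<repo>
      local method=$1 path=$2; shift 2
      curl -fsS -X "$method" -H "Authorization: token $FORGEJO_TOKEN" \
           -H 'Content-Type: application/json' "$FORGEJO_URL/api/v1$path" "$@"
    }
    forgejo_set_secret() {         # usage: forgejo_set_secret NAME VALUE
      forgejo_api PUT "/repos/$FORGEJO_REPO/actions/secrets/$1" -d "{\"data\":\"$2\"}"
    }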
Phases:
Local
1. preflight — required tools, SSH to R720, DNS resolution
2. vault     — render vault.yml from example, autogenerate JWT
               keys, prompt+encrypt, write .vault-pass
3. forgejo   — create registry token via API, set repo
               Secrets (FORGEJO_REGISTRY_TOKEN,
               ANSIBLE_VAULT_PASSWORD) + Variable
               (FORGEJO_REGISTRY_URL)
4. r720      — fetch runner registration token, stream
               bootstrap-remote.sh + lib.sh over SSH
5. haproxy   — ansible-playbook playbooks/haproxy.yml;
               verify Let's Encrypt certs landed on the
               veza-haproxy container
6. summary   — readiness report
Remote
R1. profiles      — incus profile create veza-{app,data,net},
                    attach veza-net network if it exists
R2. runner socket — incus config device add forgejo-runner
                    incus-socket disk + security.nesting=true
                    + apt install incus-client inside the runner
R3. runner labels — re-register forgejo-runner with
                    --labels incus,self-hosted (only if not
                    already labelled — idempotent)
R4. sanity        — runner ↔ Incus + runner ↔ Forgejo smoke tests
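Roughly what R1 and R2 reduce to on the R720 (a sketch: the device name incus-socket comes from the phase above, the host socket path /var/lib/incus/unix.socket is an assumption, and each command is guarded so re-runs are no-ops):

    # R1: profiles (create only if missing)
    for p in app data net; do
      incus profile show "veza-$p" >/dev/null 2>&1 || incus profile create "veza-$p"
    done

    # R2: expose the Incus socket to the runner and allow nesting
    incus config device show forgejo-runner | grep -q '^incus-socket:' || \
      incus config device add forgejo-runner incus-socket disk \
        source=/var/lib/incus/unix.socket path=/var/lib/incus/unix.socket
    incus config set forgejo-runner security.nesting=true
    incus exec forgejo-runner -- sh -c \
      'command -v incus >/dev/null 2>&1 || apt-get install -y incus-client'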
Inter-script communication:
* The SSH stream is the synchronization primitive: the local script
  invokes the remote one and blocks until it returns (sketch after
  this list).
* The remote emits structured `>>>PHASE:<name>:<status><<<` markers on
  stdout; the local script tees them to stderr so the operator sees
  remote progress in real time.
* Persistent state files survive disconnects:
    local : <repo>/.git/talas-bootstrap/local.state
    R720  : /var/lib/talas/bootstrap.state
  Both hold one `phase=DONE timestamp` line per completed phase.
  Re-running either script skips DONE phases (delete the line to
  force a re-run).
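As a sketch of the mechanism (the ssh host alias r720 is an assumption), phase 4 can pipe both scripts into a remote shell and tee the marker stream back to the operator:

    # local side: block until the remote script exits, mirroring its markers to stderr
    remote_out=$(cat lib.sh bootstrap-remote.sh | ssh r720 'sudo bash -s' | tee /dev/stderr)
    grep -q '>>>PHASE:sanity:DONE<<<' <<<"$remote_out"   # sanity is the last remote phase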
Resumable:
PHASE=N ./bootstrap-local.sh # restart at phase N
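Both resumability and the DONE-skipping fall out of a small driver loop; a plausible shape (variable and function names are illustrative, not the actual bootstrap-local.sh):

    PHASES=(preflight vault forgejo r720 haproxy summary)
    start=${PHASE:-1}                       # PHASE=N skips straight to phase N
    for i in "${!PHASES[@]}"; do
      name=${PHASES[$i]}
      (( i + 1 < start )) && continue
      if phase_skip "$name"; then log "skipping $name (already DONE)"; continue; fi
      phase_start "$name"
      "phase_$name"                         # one function per phase: phase_preflight, ...
      phase_done "$name"
    done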
Idempotency guards:
Every state-mutating action is preceded by a state-checking guard
that returns 0 if already applied (incus profile show, jq label
parse, file existence + mode check, Forgejo API GET, etc.).
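For instance, guards along these lines (a sketch; the runner config path /data/.runner and the secrets-list endpoint are assumptions):

    vault_pass_ok()   { [ -f .vault-pass ] && [ "$(stat -c %a .vault-pass)" = 600 ]; }
    runner_labelled() {
      incus exec forgejo-runner -- cat /data/.runner 2>/dev/null | \
        jq -e '.labels | index("incus")' >/dev/null
    }
    secret_exists()   {
      forgejo_api GET "/repos/$FORGEJO_REPO/actions/secrets" | \
        jq -e --arg n "$1" 'any(.[]; .name == $n)' >/dev/null
    }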
Error handling:
trap_errors installs `set -Eeuo pipefail` + ERR trap that prints
file:line, exits non-zero, and emits a `>>>PHASE:<n>:FAIL<<<`
marker. Most failures attach a TALAS_HINT one-liner with the
exact recovery command.
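A sketch of what trap_errors might look like; CURRENT_PHASE and TALAS_HINT would be set by the calling phase, and any name not in the text above is illustrative:

    trap_errors() {
      set -Eeuo pipefail
      trap 'rc=$?
            echo "ERROR at ${BASH_SOURCE[0]}:${LINENO} (exit $rc)" >&2
            if [ -n "${TALAS_HINT:-}" ]; then echo "hint: $TALAS_HINT" >&2; fi
            printf ">>>PHASE:%s:FAIL<<<\n" "${CURRENT_PHASE:-unknown}"
            exit "$rc"' ERR
    }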
Verify scripts:
Read-only; no state mutations. Output is a sequence of
PASS/FAIL lines + an exit code = number of failures. Each
failure prints a `hint:` with the precise fix command.
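The shared pattern behind both verify scripts could look like this (the check descriptions and hints are illustrative):

    FAILURES=0
    check() {                       # check "description" "hint" command [args...]
      local desc=$1 hint=$2; shift 2
      if "$@" >/dev/null 2>&1; then
        echo "PASS  $desc"
      else
        echo "FAIL  $desc"
        echo "      hint: $hint"
        FAILURES=$((FAILURES + 1))
      fi
    }

    check "veza-app profile exists" "incus profile create veza-app" \
          incus profile show veza-app
    check ".vault-pass present"     "re-run bootstrap-local.sh phase 2" \
          test -f .vault-pass

    exit "$FAILURES"                # exit code = number of failed checks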
.gitignore picks up scripts/bootstrap/.env (per-operator config)
and .git/talas-bootstrap/ (state files).
--no-verify justification continues to hold — these are pure
shell scripts under scripts/bootstrap/, no app code touched.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Veza Monorepo
Current version: v1.0.4 (cleanup + consolidation post-audit). See CHANGELOG.md and docs/PROJECT_STATE.md.
Project Structure
- apps/web — Frontend React 18 + Vite 5 + TypeScript strict (source of truth for the UI)
- veza-backend-api — Main Go 1.25 API service (Gin, GORM, Postgres, Redis, RabbitMQ, Elasticsearch). Handles REST, WebSocket, and chat (the chat server was merged into this service in v0.502).
- veza-stream-server — Rust streaming server (Axum 0.8, Tokio 1.35, Symphonia) — HLS, HTTP Range, WebSocket, gRPC
- veza-common — Shared Rust types and logging
- packages/design-system — Shared design tokens
See CLAUDE.md for the full architecture map.
Development Setup
Prerequisites: Node 20 (see .nvmrc), Go, Rust, Docker. Configure .env from .env.example.
# Verify environment
make doctor
./scripts/validate-env.sh development
# Install dependencies
make install-deps
# Option A — Backend in Docker + Web local
make dev
# Option B — All apps local with hot reload (infra from docker-compose.dev.yml)
make dev-full
# Option C — Infra only, then run services manually
docker compose -f docker-compose.dev.yml up -d
make dev-web # or make dev-backend-api, make dev-stream-server
See docs/ENV_VARIABLES.md for required variables. make build builds all services.
Quick Start
Frontend only
cd apps/web
npm install
npm run dev
Docker Production
Canonical production compose file: docker-compose.prod.yml
docker compose -f docker-compose.prod.yml up -d
See make/config.mk for COMPOSE_PROD and deployment docs.
CI/CD
- Badge: CI status above. Set SLACK_WEBHOOK_URL (Incoming Webhook) in repo secrets to receive Slack notifications on failure.
Disabled workflows
- Storybook (chromatic.yml.disabled, storybook-audit.yml.disabled, visual-regression.yml.disabled): deferred until MSW is wired up for /api/v1/auth/me and /api/v1/logs/frontend, which currently causes ~1,400 network errors in the Storybook build. The npm scripts (storybook, build-storybook) still work locally for one-off component inspection. To reactivate in CI, fix the MSW handlers and rename the three files back to .yml (see the snippet below).
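A possible reactivation step, assuming the disabled files live under .forgejo/workflows/:

    # once the MSW handlers for /api/v1/auth/me and /api/v1/logs/frontend exist
    cd .forgejo/workflows
    for f in chromatic storybook-audit visual-regression; do
      git mv "$f.yml.disabled" "$f.yml"
    done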
Documentation
- Developer Onboarding — Setup, architecture, conventions, troubleshooting
- Documentation index — Complete documentation index
- See docs/ for detailed architecture and development guides. Older audits and reports are archived in docs/archive/.