# `scripts/bootstrap/` — bootstrap the Veza deploy pipeline
Two parallel scripts (one per host) + four helpers + one shared lib.
Each script is **idempotent**, **resumable**, and **read-only by default**
unless explicitly asked to mutate. **No NOPASSWD sudo required**.
The heavy lifting (Incus profiles, forgejo-runner config, HAProxy
edge, Let's Encrypt) is done by **Ansible playbooks**, not bash.
The shell scripts are thin orchestrators that handle the
chicken-and-egg part: create the vault that Ansible needs, set
the Forgejo CI secrets, then call `ansible-playbook`.
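That orchestration pattern can be sketched as follows. This is an
illustrative skeleton, not the real `bootstrap-local.sh`: the phase
bodies are stand-ins, and the `ansible-playbook` command is echoed
rather than executed.

```shell
#!/usr/bin/env sh
# Illustrative skeleton of the thin-orchestrator pattern.
# Phase bodies are stand-ins; the real script also has preflight and
# summary phases, state tracking, and error traps.
set -eu

phase_vault() {
  # real script: render vault.yml, generate keys, prompt for a
  # password, encrypt with ansible-vault
  echo "vault: render + encrypt the Ansible vault"
}

phase_forgejo() {
  # real script: set repo Secrets / Variables through the Forgejo API
  echo "forgejo: set CI secrets"
}

phase_ansible() {
  # one run covers both playbooks; the sudo password is asked once and
  # held in Ansible's memory for every become: true task
  echo ansible-playbook -i inventory/staging.yml --ask-become-pass \
    playbooks/bootstrap_runner.yml playbooks/haproxy.yml
}

phase_vault
phase_forgejo
phase_ansible
```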
## Files
| File | Where it runs | What it does |
|---|---|---|
| `lib.sh` | sourced by all | logging, error trap, idempotent state file, Forgejo API helpers |
| `bootstrap-local.sh` | operator's laptop | drives Ansible over SSH; `--ask-become-pass` for sudo on the R720 |
| `bootstrap-r720.sh` | R720 directly (sudo) | drives Ansible locally (`connection: local`); no SSH, no sudo prompts |
| `verify-local.sh` | laptop | read-only checks of local + remote state |
| `verify-r720.sh` | R720 (sudo) | read-only checks of R720 state |
| `enable-auto-deploy.sh` | laptop | restores `.forgejo/workflows/`, uncomments push: trigger |
| `reset-vault.sh` | laptop | recovery from vault password mismatch (destructive) |
| `.env.example` | template | copy to `.env`, fill in, gitignored |
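As an illustration, a filled-in `.env` might look like the sketch
below. `SSH_TARGET` and `R720_USER` are variable names the scripts
actually use; `FORGEJO_URL` is a hypothetical placeholder, not a
confirmed variable name.

```shell
# Hypothetical .env sketch — copy .env.example to .env and fill in.
# FORGEJO_URL is an illustrative placeholder.
SSH_TARGET="r720"    # SSH alias or hostname of the R720
R720_USER=""         # may stay empty if the SSH alias sets User=
FORGEJO_URL="https://forgejo.talas.group"
```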
## Two scripts, one Ansible
Both `bootstrap-local.sh` and `bootstrap-r720.sh` end up running
the **same two playbooks**:
* `playbooks/bootstrap_runner.yml` — Incus profiles + forgejo-runner
Incus access + runner registration with `incus` label
* `playbooks/haproxy.yml` — edge HAProxy container + dehydrated
Let's Encrypt issuance for veza.fr / staging.veza.fr / talas.fr /
forgejo.talas.group
The difference is the **inventory**:
* laptop → `inventory/staging.yml` (SSH to R720, `--ask-become-pass`)
* R720 → `inventory/local.yml` (`connection: local`, already root)

Pick whichever is convenient. The state files are independent (the
laptop keeps state under `.git/talas-bootstrap/`, the R720 under
`/var/lib/talas/`), so running both at different times doesn't
duplicate any work.
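Whichever host ran the bootstrap, the matching verify script reports
the resulting state as PASS/FAIL lines, with the exit code equal to
the number of failures. A minimal sketch of that convention — the
`check()` helper name and the labels are illustrative, not taken from
the real scripts:

```shell
# Sketch of the verify-script convention: read-only checks, one
# PASS/FAIL line each, exit code = number of failures.
fails=0

check() {
  label=$1; shift
  if "$@" >/dev/null 2>&1; then
    echo "PASS  $label"
  else
    echo "FAIL  $label"
    echo "      hint: exact fix command goes here"
    fails=$((fails + 1))
  fi
}

check "shell sanity"          true
check "deliberately missing"  test -e /no/such/path
# real scripts end with: exit "$fails"
```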
## State files
```
laptop : <repo>/.git/talas-bootstrap/local.state
R720   : /var/lib/talas/r720-bootstrap.state
```
Each file holds one `phase=DONE timestamp` line per completed phase. Re-running a bootstrap script skips phases already marked DONE.
To force a phase re-run, delete its line :
```bash
sed -i '/^vault=/d' .git/talas-bootstrap/local.state
```
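Deleting the line works because the scripts consult the state file before each phase. A minimal sketch of that guard, using a hypothetical `phase_done` helper and a temp file in place of the real state path (the actual implementation lives in `lib.sh`):

```shell
# Illustrative DONE-phase guard — not the real lib.sh code.
STATE="$(mktemp)"
printf 'preflight=DONE 2026-04-30T13:12:26+00:00\n' > "$STATE"

phase_done() {
  # A phase counts as complete when its "name=DONE ..." line exists.
  grep -q "^${1}=DONE" "$STATE"
}

phase_done preflight && echo "preflight: skipped (already DONE)"
phase_done vault     || echo "vault: will run"
```

Removing the `vault=` line (as with the `sed` above) simply makes the guard fail, so that phase runs again on the next invocation.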
## Quickstart — from the laptop
```bash
cd /home/senke/git/talas/veza/scripts/bootstrap
cp .env.example .env
vim .env # at minimum : FORGEJO_ADMIN_TOKEN
chmod +x *.sh
# Set up everything end-to-end :
./bootstrap-local.sh
# Or skip phases you've already done :
PHASE=4 ./bootstrap-local.sh
# Verify any time (read-only) :
./verify-local.sh
```
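Re-running the script skips phases already recorded as DONE in the state
file (one `phase=DONE timestamp` line each ; delete a line to force a
re-run). A minimal sketch of that guard — function names and the temp
state file are illustrative, not the actual `lib.sh` implementation :
```bash
STATE_FILE="$(mktemp)"            # real scripts : .git/talas-bootstrap/local.state

phase_done() {                    # returns 0 if the phase is already DONE
  grep -q "^$1=DONE" "$STATE_FILE" 2>/dev/null
}

mark_done() {                     # append one "phase=DONE timestamp" line
  echo "$1=DONE $(date -u +%Y-%m-%dT%H:%M:%SZ)" >> "$STATE_FILE"
}

run_phase() {
  if phase_done "$1"; then
    echo "skip $1 (already DONE)"
  else
    echo "run $1"
    mark_done "$1"
  fi
}

run_phase vault                   # first run : executes, records DONE
run_phase vault                   # second run : skipped
```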
## Quickstart — directly on the R720
```bash
ssh srv-102v
cd /path/to/veza/scripts/bootstrap
cp .env.example .env
vim .env # FORGEJO_ADMIN_TOKEN at minimum
sudo ./bootstrap-r720.sh
# Verify :
sudo ./verify-r720.sh
```
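The verify scripts are read-only : they print one PASS/FAIL line per
check and exit with the number of failures. A sketch of that tally
pattern — `check()` and the sample probes are hypothetical, not the
real `verify-r720.sh` :
```bash
fail=0
check() {                          # check <label> <command...>
  if "${@:2}" >/dev/null 2>&1; then
    echo "PASS  $1"
  else
    echo "FAIL  $1"
    fail=$((fail + 1))
  fi
}

check "incus binary present" command -v incus
check "state file present"   test -f /var/lib/talas/r720-bootstrap.state

echo "failures : $fail"            # the real script exits with this count
```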
## Sudo on the R720 — the design choice
The bash scripts **do not require NOPASSWD sudo** on the R720. Two
reasons :
1. **Trust boundary** — NOPASSWD turns any compromise of the operator's
account into root on the host. Keeping the password requirement
means an attacker also needs to phish/keylog the sudo password.
2. **Ansible's `--ask-become-pass`** is fine for interactive runs.
The operator types the password ONCE per `bootstrap-local.sh`
   invocation ; ansible holds it in memory and reuses it for every
`become: true` task. No file written, no env var leaked.
`pipelining = False` in `ansible.cfg` is what makes interactive
`--ask-become-pass` reliable (the previous `True` setting raced sudo's
TTY-driven prompt).
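The switch itself is one line in `ansible.cfg` ; a minimal fragment
(surrounding options omitted, section placement per Ansible's config
layout) :
```ini
[ssh_connection]
# Must stay False : with pipelining = True the become prompt
# raced sudo's TTY-driven prompt and timed out.
pipelining = False
```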
## What each phase needs
| Phase | Needs |
|---|---|
| 1. preflight | git, ansible, dig, ssh, jq locally; SSH to the R720 (laptop); DNS resolution (warning only if missing) |
| 2. vault | nothing; auto-generates JWT keys + 11 random passwords, prompts for a vault password |
| 3. forgejo | `FORGEJO_ADMIN_TOKEN` (.env or env) — scopes: write:repository, read:repository |
| 4. ansible | sudo password on the R720 (interactive; not stored) |
| 5. summary | nothing |
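A minimal `.env` covering these phases might look like the sketch below. The values are placeholders and the authoritative variable list is `.env.example`; the names used here follow the table and the troubleshooting entries.

```shell
# .env — hypothetical example; copy .env.example and adjust.
R720_HOST=r720            # SSH host, or an ssh_config alias
R720_USER=                # leave empty if the alias already sets User=
FORGEJO_ADMIN_TOKEN=xxxx  # placeholder; needs at least write:repository
FORGEJO_INSECURE=1        # only while Forgejo still serves a self-signed cert
```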
## Troubleshooting
* **Phase 1 SSH fails** — verify `R720_HOST` and `R720_USER` in `.env`.
  If you use an SSH config alias, set `R720_HOST=<alias>` and leave
  `R720_USER=` empty.
* **Phase 2 cannot decrypt** — `./reset-vault.sh` (destructive,
re-prompts for everything).
* **Phase 3 Forgejo unreachable** — set `FORGEJO_INSECURE=1` to accept
  the self-signed cert on `https://10.0.20.105:3000`. Switch to
  `https://forgejo.talas.group` once the edge HAProxy + Let's Encrypt are up.
* **Phase 3 token lacks scope** — the token needs at least
  `write:repository`. With `write:admin` the script can auto-create
  the registry token; without it, you'll be prompted to paste one
  you create manually.
* **Phase 4 `Timeout waiting for privilege escalation prompt`** —
  ensure `pipelining = False` in `infra/ansible/ansible.cfg`. That is
  the repo default; revert the setting if it has been changed.
* **Phase 4 dehydrated fails** — port 80 must be reachable from the
  Internet (HTTP-01 challenge). Test from an external host:
  `curl http://veza.fr/`. If the request doesn't reach the R720, configure
  port forwarding for 80 + 443 on your home router / ISP box.
* **Phase 4 Incus network not found** — group_vars defaults to
  `net-veza`. The script auto-detects the bridge from the forgejo
  container's network on the R720; if your bridge has a different name,
  set `veza_incus_network` in `group_vars/staging.yml` (or
  `inventory/local.yml` for the R720 case).
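The pipelining requirement is easy to check mechanically. A hedged sketch — the grep pattern assumes the plain ini form `pipelining = False` in `ansible.cfg`:

```shell
# check_pipelining FILE — succeed iff FILE sets pipelining = False.
# Assumes the ini-style "pipelining = False" line used by ansible.cfg.
check_pipelining() {
  grep -Eiq '^[[:space:]]*pipelining[[:space:]]*=[[:space:]]*False' "$1"
}

# Typical use against the repo default:
#   check_pipelining infra/ansible/ansible.cfg || echo "fix ansible.cfg"
```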
## After bootstrap
* Trigger the first deploy manually: Forgejo Actions UI → Veza deploy → Run workflow.
* Once green, run `./enable-auto-deploy.sh` to restore the
`push:main + tag:v*` triggers.
* `verify-{local,r720}.sh` are safe to run any time.
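Since the verify scripts are read-only and exit with the number of failed checks, a small wrapper can sum failures across verifiers. A sketch under that exit-code convention; in practice `verify-r720.sh` would be invoked as root on the R720 (e.g. over an SSH alias), which this wrapper leaves to the caller:

```shell
# run_verifiers CMD... — run each verifier command in turn, summing exit
# codes (each verifier exits with its number of failed checks), then
# report and return the total.
run_verifiers() {
  local fails=0 cmd rc
  for cmd in "$@"; do
    if $cmd; then rc=0; else rc=$?; fi
    fails=$((fails + rc))
  done
  echo "total failed checks: $fails"
  return "$fails"
}
```

For example, `run_verifiers ./scripts/bootstrap/verify-local.sh` on the laptop, adding the R720 verifier as a second argument when an SSH alias is set up.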