Rearchitecture after operator pushback: the previous design did
too much in bash (SSH-streaming script chunks, a manual sudo
dance, a NOPASSWD requirement). Ansible is the right tool. The
shell scripts are now thin orchestrators that handle the
chicken-and-egg of vault + Forgejo CI provisioning, then call
ansible-playbook.
Key principles:
1. NO NOPASSWD sudo on the R720. sudo is prompted interactively
   via --ask-become-pass; the password is held in Ansible memory
   only for the duration of the run.
2. Two parallel scripts — one per host, each fully self-contained.
3. Both run the SAME Ansible playbooks (bootstrap_runner.yml +
   haproxy.yml); only the inventory differs.
Files (new + replaced):
ansible.cfg
pipelining=True → False. Required for --ask-become-pass to
work reliably; the previous setting raced sudo's prompt and
timed out at 12s.
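For context, this is a one-line change; the stanza below is a sketch (section placement follows stock ansible.cfg conventions, comment wording mine):

```ini
[ssh_connection]
; pipelining=True pushes modules through a single exec channel with no
; tty, which raced sudo's interactive prompt under --ask-become-pass
; and hit the become timeout. False trades some speed for reliability.
pipelining = False
```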
playbooks/bootstrap_runner.yml (new)
The Incus-host-side bootstrap, ported from the old
scripts/bootstrap/bootstrap-remote.sh. Three plays:
Phase 1: ensure the veza-app + veza-data profiles exist;
drop the legacy empty veza-net profile.
Phase 2: forgejo-runner gets /var/lib/incus/unix.socket
attached as a disk device, security.nesting=true,
/usr/bin/incus pushed in as /usr/local/bin/incus,
smoke-tested.
Phase 3: forgejo-runner registered with `incus,self-hosted`
label (idempotent — skips if already labelled).
Each task uses Ansible idioms (`incus_profile` / `incus_command`
where they exist, `command:` with `failed_when` and explicit
state checks elsewhere). `no_log` guards the registration token.
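As a hedged sketch of the Phase 2/3 shape (task names, the device name, the `.runner` config path, and the forgejo-runner CLI invocation are assumptions, not the actual playbook):

```yaml
# Phase 2 tasks — run on the Incus host.
- name: Check whether the incus socket device is already attached
  command: incus config device get forgejo-runner incus-socket source
  register: socket_device
  failed_when: false
  changed_when: false

- name: Attach /var/lib/incus/unix.socket as a disk device
  command: >
    incus config device add forgejo-runner incus-socket disk
    source=/var/lib/incus/unix.socket path=/var/lib/incus/unix.socket
  when: socket_device.rc != 0

- name: Allow nesting so the runner can drive sibling containers
  command: incus config set forgejo-runner security.nesting=true

- name: Push the incus client binary into the container
  command: incus file push /usr/bin/incus forgejo-runner/usr/local/bin/incus

# Phase 3 task — runs inside the container (forgejo_runner group).
- name: Register the runner with the incus,self-hosted labels
  command: >
    forgejo-runner register --no-interactive
    --instance {{ forgejo_api_url }}
    --token {{ forgejo_registration_token }}
    --labels incus,self-hosted
  args:
    creates: /var/lib/forgejo-runner/.runner   # assumed config path
  no_log: true
```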
inventory/local.yml (new)
Inventory for `bootstrap-r720.sh` — connection: local instead
of SSH+become. Same group structure as staging.yml;
container groups use the community.general.incus connection
plugin (the local incus binary, no remote hop).
inventory/{staging,prod}.yml (modified)
Added `forgejo_runner` group (target of bootstrap_runner.yml
phase 3, reached via community.general.incus from the host).
scripts/bootstrap/bootstrap-local.sh (rewritten)
Five phases: preflight, vault, forgejo, ansible, summary.
Phase 4 calls a single `ansible-playbook` with both
bootstrap_runner.yml + haproxy.yml in sequence.
--ask-become-pass: Ansible prompts ONCE for sudo, holds the
password in memory, and reuses it for every become: true task.
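A sketch of what that single invocation looks like (the inventory and playbook paths are assumptions drawn from the file list above):

```shell
#!/usr/bin/env bash
set -euo pipefail
# Build the one ansible-playbook invocation: both playbooks in order,
# --ask-become-pass so sudo is prompted exactly once and the password
# is reused for every become: true task.
cmd=(ansible-playbook -i inventory/staging.yml --ask-become-pass
     playbooks/bootstrap_runner.yml playbooks/haproxy.yml)
printf '%s\n' "${cmd[*]}"   # show what would run
# "${cmd[@]}"               # uncomment to execute
```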
scripts/bootstrap/bootstrap-r720.sh (new)
Symmetric to bootstrap-local.sh but runs as root on the R720.
No SSH preflight, no --ask-become-pass (already root).
Same Ansible playbooks, inventory/local.yml.
scripts/bootstrap/verify-r720.sh (new — replaces verify-remote)
Read-only checks of R720 state. Run as root locally on the R720.
scripts/bootstrap/verify-local.sh (modified)
Cross-host SSH check now fits the env-var-driven SSH_TARGET
pattern (R720_USER may be empty if the alias has User=).
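The construction can be sketched as (the helper name and alias are hypothetical; R720_USER is from the text):

```shell
# Sketch of the SSH_TARGET construction: R720_USER may legitimately
# be empty when the ssh alias already carries User= in ssh_config.
build_ssh_target() {
  local host="$1" user="${2:-}"
  if [ -n "$user" ]; then
    printf '%s@%s\n' "$user" "$host"
  else
    printf '%s\n' "$host"
  fi
}
build_ssh_target r720 deploy   # → deploy@r720
build_ssh_target r720          # → r720
```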
scripts/bootstrap/{bootstrap-remote.sh, verify-remote.sh,
verify-remote-ssh.sh} (DELETED)
Replaced by playbooks/bootstrap_runner.yml + verify-r720.sh.
README.md (rewritten)
Documents the parallel-script architecture, the
no-NOPASSWD-sudo design choice (--ask-become-pass), each
phase's needs, and a refreshed troubleshooting list.
State files unchanged in shape:
laptop: .git/talas-bootstrap/local.state
R720:   /var/lib/talas/r720-bootstrap.state
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
inventory/local.yml (129 lines, 3.9 KiB, YAML):
# Local inventory — run ansible-playbook directly on the R720 (no SSH,
# no --ask-become-pass needed because the operator is already root via
# `sudo` when invoking the script). Useful when:
#   * The operator's laptop can't reach the R720 (no WireGuard yet)
#   * Disaster recovery — work directly on the host
#   * Faster iteration during initial bootstrap
#
# Same shape as staging.yml but:
#   * incus_hosts / staging hosts → localhost (connection: local)
#   * Container groups (haproxy, veza_app_*, veza_data, forgejo_runner)
#     keep using community.general.incus — the connection just goes
#     through the LOCAL incus binary, not over SSH.
#
# Usage:
#   sudo ansible-playbook -i inventory/local.yml playbooks/bootstrap_runner.yml \
#     --vault-password-file /path/to/.vault-pass \
#     -e forgejo_registration_token=$TOKEN \
#     -e forgejo_api_url=https://10.0.20.105:3000
all:
  hosts:
    localhost:
      ansible_connection: local
      ansible_python_interpreter: /usr/bin/python3
  vars:
    # Default to the env=staging shape; override with -e veza_env=prod
    # if running prod-specific tasks locally.
    veza_env: staging
    veza_container_prefix: "veza-staging-"
    veza_incus_dns_suffix: lxd
    haproxy_topology: blue-green
    veza_public_host: staging.veza.fr
    veza_public_url: "https://staging.veza.fr"
    veza_cors_allowed_origins:
      - "https://staging.veza.fr"
    veza_log_level: DEBUG
    veza_otel_sample_rate: "1.0"
    veza_feature_flags:
      HYPERSWITCH_ENABLED: "false"
      STRIPE_CONNECT_ENABLED: "false"
      WEBAUTHN_ENABLED: "true"
    veza_release_retention: 30
    postgres_password: "{{ vault_postgres_password }}"
    redis_password: "{{ vault_redis_password }}"
    rabbitmq_password: "{{ vault_rabbitmq_password }}"
    veza_incus_network: net-veza
    veza_incus_subnet: 10.0.20.0/24
  children:
    incus_hosts:
      hosts:
        localhost:
    staging:
      hosts:
        localhost:
    forgejo_runner:
      hosts:
        forgejo-runner:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    haproxy:
      hosts:
        veza-haproxy:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_app_backend:
      children:
        veza_app_backend_blue:
        veza_app_backend_green:
        veza_app_backend_tools:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_app_backend_blue:
      hosts:
        veza-staging-backend-blue:
    veza_app_backend_green:
      hosts:
        veza-staging-backend-green:
    veza_app_backend_tools:
      hosts:
        veza-staging-backend-tools:
    veza_app_stream:
      children:
        veza_app_stream_blue:
        veza_app_stream_green:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_app_stream_blue:
      hosts:
        veza-staging-stream-blue:
    veza_app_stream_green:
      hosts:
        veza-staging-stream-green:
    veza_app_web:
      children:
        veza_app_web_blue:
        veza_app_web_green:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_app_web_blue:
      hosts:
        veza-staging-web-blue:
    veza_app_web_green:
      hosts:
        veza-staging-web-green:
    veza_data:
      children:
        veza_data_postgres:
        veza_data_redis:
        veza_data_rabbitmq:
        veza_data_minio:
      vars:
        ansible_connection: community.general.incus
        ansible_python_interpreter: /usr/bin/python3
    veza_data_postgres:
      hosts:
        veza-staging-postgres:
    veza_data_redis:
      hosts:
        veza-staging-redis:
    veza_data_rabbitmq:
      hosts:
        veza-staging-rabbitmq:
    veza_data_minio:
      hosts:
        veza-staging-minio: