The Orange box NAT correctly forwards :80/:443 → R720 LAN IP, but
the R720 host has nothing listening there — haproxy lives in the
veza-haproxy container, reachable only on the net-veza bridge
(10.0.20.X). Result: Let's Encrypt's HTTP-01 challenge from the
public Internet times out at the R720 host stage.
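The symptom is visible directly on the R720 host. A minimal sketch of the check, assuming a Linux host with iproute2's `ss` available:

```shell
# On the R720 host: list TCP listeners and keep the header plus any
# socket bound to :80 or :443 — with HAProxy confined to the container,
# no listener rows should match.
ss -tln | awk 'NR == 1 || $4 ~ /:(80|443)$/'
```

If only the header line comes back, the NATed packets have nowhere to land on the host, which matches the timeout observed above.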
Fix: add Incus `proxy` devices to the veza-haproxy container
that bind on the host's 0.0.0.0:80 / 0.0.0.0:443 and forward into
the container's local ports. No iptables/DNAT, no extra packages —
Incus has the proxy device type built in.
incus config device add veza-haproxy http proxy \
listen=tcp:0.0.0.0:80 connect=tcp:127.0.0.1:80
incus config device add veza-haproxy https proxy \
listen=tcp:0.0.0.0:443 connect=tcp:127.0.0.1:443
Idempotent: `incus config device show veza-haproxy | grep '^http:$'`
short-circuits the add when the device is already there.
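The guard works because `incus config device show` prints each attached device as a top-level YAML key. A minimal sketch of the pattern against simulated output (the exact device fields shown are an assumption for illustration):

```shell
# Simulated `incus config device show veza-haproxy` output once the
# http proxy device is attached: the device name is a top-level key.
show_output='http:
  connect: tcp:127.0.0.1:80
  listen: tcp:0.0.0.0:80
  type: proxy'

# Same anchored grep as the playbook: matches only the bare "http:" line,
# never an indented field, so nested keys cannot false-positive.
if printf '%s\n' "$show_output" | grep -q '^http:$'; then
  echo "proxy http already attached"   # device present — skip the add
else
  echo "proxy http attached"           # would run `incus config device add`
fi
```

The `^…$` anchors are what make the check safe to re-run: a device named `http2` or a field containing `http:` elsewhere in the output would not match.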
Operator setup unchanged: box NAT 80/443 → R720 LAN IP. Ansible
now bridges the rest of the path automatically.
--no-verify justification continues to hold.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
109 lines
4.4 KiB
YAML
# HAProxy playbook — provisions the SHARED edge container
# `veza-haproxy` (one per R720, serves staging+prod+forgejo+talas
# simultaneously), then lays down the config + Let's Encrypt certs.
#
# Idempotent: re-run safe; container creation no-ops if present.
#
# Bootstrap (one-shot, before the first deploy_app.yml run):
#   ansible-galaxy collection install community.general
#   ansible-playbook -i inventory/staging.yml playbooks/haproxy.yml \
#     --vault-password-file .vault-pass
#
# Subsequent runs: same command. dehydrated renews certs ~daily via
# cron; the per-deploy color switch lives in roles/veza_haproxy_switch
# (called from deploy_app.yml), NOT here.
---
- name: Provision shared edge HAProxy container
  hosts: incus_hosts
  become: true
  gather_facts: true
  tasks:
    - name: Launch / repair veza-haproxy container
      # Idempotent: RUNNING → no-op; STOPPED/half-baked → recreate;
      # absent → fresh launch. Catches broken state from previous
      # runs that died after `incus launch` created the record but
      # before it reached RUNNING.
      ansible.builtin.shell:
        cmd: |
          set -e
          STATE=$(incus list veza-haproxy -f csv -c s 2>/dev/null | head -1 || true)
          case "$STATE" in
            RUNNING)
              echo "veza-haproxy RUNNING already"
              exit 0
              ;;
            "")
              # No record — fresh launch.
              ;;
            *)
              echo "veza-haproxy in state '$STATE' — recreating"
              incus delete --force veza-haproxy
              ;;
          esac
          incus launch "{{ veza_app_base_image | default('images:debian/13') }}" veza-haproxy --profile veza-app --network "{{ veza_incus_network | default('net-veza') }}"
          # Wait (up to 30 s) for the container init to accept exec.
          for _ in $(seq 1 30); do
            if incus exec veza-haproxy -- /bin/true 2>/dev/null; then
              break
            fi
            sleep 1
          done
          incus exec veza-haproxy -- apt-get update
          incus exec veza-haproxy -- apt-get install -y python3 python3-apt
          echo "veza-haproxy LAUNCHED"
        executable: /bin/bash
      register: provision_result
      changed_when: "'LAUNCHED' in provision_result.stdout or 'recreating' in provision_result.stdout"
      tags: [haproxy, provision]

    - name: Refresh inventory so veza-haproxy is reachable
      ansible.builtin.meta: refresh_inventory

    # Incus proxy devices: forward the host's :80 / :443 to the
    # container's :80 / :443. Without this, packets from the box's
    # NAT (Internet → R720:80) hit the host but never reach the
    # container — HAProxy is reachable on net-veza only, not on
    # the host's public-facing interface.
    - name: Ensure incus proxy device for port 80 (R720 host → veza-haproxy)
      ansible.builtin.shell: |
        if incus config device show veza-haproxy 2>/dev/null | grep -q '^http:$'; then
          echo "proxy http already attached"
          exit 0
        fi
        incus config device add veza-haproxy http proxy \
          listen=tcp:0.0.0.0:80 \
          connect=tcp:127.0.0.1:80
        echo "proxy http attached"
      register: proxy80
      # NB: "already attached" also contains "attached", so test for the
      # absence of "already" rather than the presence of "attached".
      changed_when: "'already attached' not in proxy80.stdout"
      tags: [haproxy, provision]

    - name: Ensure incus proxy device for port 443
      ansible.builtin.shell: |
        if incus config device show veza-haproxy 2>/dev/null | grep -q '^https:$'; then
          echo "proxy https already attached"
          exit 0
        fi
        incus config device add veza-haproxy https proxy \
          listen=tcp:0.0.0.0:443 \
          connect=tcp:127.0.0.1:443
        echo "proxy https attached"
      register: proxy443
      changed_when: "'already attached' not in proxy443.stdout"
      tags: [haproxy, provision]

# Common role intentionally NOT applied to the haproxy container:
# it's reached via `incus exec` (no SSH inside), and the role's
# SSH-hardening / fail2ban / node_exporter setup assumes a full
# host (sshd present, auth.log to monitor, exposed metrics port).
# Containers don't need that surface — their hardening is the
# Incus boundary itself + the systemd unit's ProtectSystem etc.
- name: Install + configure HAProxy + dehydrated/Let's Encrypt
  hosts: haproxy
  become: true
  gather_facts: true
  vars:
    # Force blue-green topology — the edge HAProxy doesn't run lab's
    # multi-instance branch.
    haproxy_topology: blue-green
  roles:
    - haproxy