Self-hosted edge cache on a dedicated Incus container that sits between clients and the MinIO EC:2 cluster. It replaces the need for an external CDN at v1.0 traffic levels — it handles thousands of concurrent listeners on the R720 and leaks zero logs to a third party. This is the phase-1 alternative documented in the v1.0.9 CDN synthesis: phase-1 = self-hosted Nginx, phase-2 = 2 cache nodes + GeoDNS, phase-3 = Bunny.net via the existing CDN_* config (still inert with CDN_ENABLED=false).

- infra/ansible/roles/nginx_proxy_cache/: installs nginx + curl, renders nginx.conf with a shared cache zone (128 MiB keys + 20 GiB disk, inactive=7d), and renders the veza-cache site that proxies to the minio_nodes upstream pool with keepalive=32. HLS segments are cached 7d via 1 MiB slices; .m3u8 is cached 60s; everything else 1h. (The zone and stale-serving settings are sketched after this message.)
- Cache key excludes Authorization / Cookie (presigned URLs only in v1.0). slice_range is included for segments so byte-range requests with arbitrary offsets all hit the same cached chunks.
- proxy_cache_use_stale error timeout updating http_500..504 + background_update + lock — survives MinIO partial outages without cold-storming the origin.
- X-Cache-Status surfaced on every response so smoke tests + operators can verify HIT/MISS without parsing access logs.
- stub_status bound to 127.0.0.1:81/__nginx_status for the future prometheus nginx_exporter sidecar.
- infra/ansible/playbooks/nginx_proxy_cache.yml: provisions the Incus container and applies the common baseline + role.
- inventory/lab.yml: new nginx_cache group.
- infra/ansible/tests/test_nginx_cache.sh: MISS→HIT roundtrip via X-Cache-Status, plus on-disk entry verification.

Acceptance: the smoke test reports MISS then HIT for the same URL; the cache directory carries on-disk entries.

No backend code change — the cache is transparent. To route through it, flip AWS_S3_ENDPOINT=http://nginx-cache.lxd:80 in the API env.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
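For reference, a minimal sketch of the http-level directives the description above implies. The on-disk path and the literal sizes written here are assumptions (the role templates them through its own variables); only the zone name veza_cache, the 128 MiB / 20 GiB / inactive=7d figures, and the stale/lock behaviour come from the message itself:

# Hypothetical excerpt of the rendered nginx.conf (cache path is an assumption).
proxy_cache_path /var/cache/nginx/veza
    levels=1:2
    keys_zone=veza_cache:128m   # shared key zone, ~128 MiB of keys
    max_size=20g                # 20 GiB on-disk cap
    inactive=7d                 # evict entries untouched for 7 days
    use_temp_path=off;

# Serve stale entries while the origin is erroring or being refreshed,
# refresh in the background, and collapse concurrent misses into a
# single upstream fetch.
proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_cache_lock on;

In practice the acceptance check boils down to requesting the same object twice and reading the X-Cache-Status response header: the first response should report MISS, the second HIT.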
95 lines
3.7 KiB
Django/Jinja
# Managed by Ansible — do not edit by hand.
# Veza edge cache — proxy_pass to MinIO, cache HLS aggressively.

# MinIO upstream — round-robin across the EC:2 pool. health_check
# directive (ngx_http_upstream_module) is nginx-plus only; on
# OSS we rely on the built-in passive health check (max_fails +
# fail_timeout) below.
upstream veza_minio {
{% for host in groups['minio_nodes'] | default(['minio-1', 'minio-2', 'minio-3', 'minio-4']) %}
    server {{ host }}.lxd:{{ nginx_cache_minio_port }} max_fails=3 fail_timeout=10s;
{% endfor %}
    keepalive 32;
}

# Internal stub_status endpoint for the prometheus exporter.
server {
    listen 127.0.0.1:81;
    server_name localhost;

    location {{ nginx_cache_stub_status_path }} {
        stub_status;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
}

server {
    listen {{ nginx_cache_listen_port }} default_server;
    server_name {{ nginx_cache_server_name }};

    # Surface the cache verdict on every response so smoke tests +
    # operators can verify HIT/MISS without parsing access logs.
    add_header X-Cache-Status $upstream_cache_status always;

    # Health probe — bypasses the cache entirely so monitors see the
    # nginx instance liveness, not a stale cache.
    location = /health {
        access_log off;
        return 200 "ok\n";
        add_header Content-Type text/plain;
    }

    # ----------------------------------------------------------------
    # HLS segments — content-addressed (filename includes a hash) so
    # we cache aggressively. 7 days is the upper bound; the backend
    # already sends max-age=86400 immutable, and the longer TTL below
    # applies because proxy_ignore_headers drops the origin's
    # Cache-Control, letting proxy_cache_valid take over.
    # ----------------------------------------------------------------
    location ~* \.(ts|m4s|mp4|aac|m4a)$ {
        proxy_pass http://veza_minio;
        proxy_cache veza_cache;
        proxy_cache_valid 200 206 {{ nginx_cache_ttl_segment }};
        proxy_cache_valid 404 1m;  # negative cache for typo'd URLs
        proxy_ignore_headers Cache-Control Expires Set-Cookie;
        proxy_http_version 1.1;
        proxy_set_header Connection "";

        # 1 MiB slice — Nginx fetches in 1 MiB chunks even on small
        # range requests, dramatically improving hit ratio for byte
        # ranges. Standard for HLS proxy_cache deployments.
        slice 1m;
        proxy_set_header Range $slice_range;
        proxy_cache_key "$scheme$request_method$host$uri$is_args$args$slice_range";
    }

    # ----------------------------------------------------------------
    # HLS playlists — short TTL because live streams may regenerate.
    # 60s matches the backend's Cache-Control on .m3u8 responses.
    # ----------------------------------------------------------------
    location ~* \.m3u8$ {
        proxy_pass http://veza_minio;
        proxy_cache veza_cache;
        proxy_cache_valid 200 {{ nginx_cache_ttl_playlist }};
        proxy_cache_valid 404 10s;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }

    # ----------------------------------------------------------------
    # Catch-all: cover art, original audio file downloads, etc.
    # Shorter TTL than segments because these can be replaced
    # in-place (track edits); the user is expected to bust them
    # via PUT to the same key.
    # ----------------------------------------------------------------
    location / {
        proxy_pass http://veza_minio;
        proxy_cache veza_cache;
        proxy_cache_valid 200 206 {{ nginx_cache_ttl_other }};
        proxy_cache_valid 404 1m;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}