Self-hosted edge cache on a dedicated Incus container, sitting between clients and the MinIO EC:2 cluster. It replaces the need for an external CDN at v1.0 traffic levels: it handles thousands of concurrent listeners on the R720 and leaks zero logs to a third party. This is the phase-1 alternative documented in the v1.0.9 CDN synthesis: phase-1 = self-hosted Nginx, phase-2 = 2 cache nodes + GeoDNS, phase-3 = Bunny.net via the existing CDN_* config (still inert with CDN_ENABLED=false).

- infra/ansible/roles/nginx_proxy_cache/: installs nginx + curl, renders nginx.conf with a single shared cache zone (128 MiB keys + 20 GiB disk, inactive=7d), and renders the veza-cache site that proxies to the minio_nodes upstream pool with keepalive=32. HLS segments are cached 7d via 1 MiB slices; .m3u8 playlists are cached 60s; everything else 1h.
- Cache key excludes Authorization / Cookie (presigned URLs only in v1.0). slice_range is included for segments so byte-range requests with arbitrary offsets all hit the same cached chunks.
- proxy_cache_use_stale error timeout updating http_500..504 + background_update + lock: survives MinIO partial outages without cold-storming the origin.
- X-Cache-Status is surfaced on every response so smoke tests and operators can verify HIT/MISS without parsing access logs.
- stub_status bound to 127.0.0.1:81/__nginx_status for the future Prometheus nginx_exporter sidecar.
- infra/ansible/playbooks/nginx_proxy_cache.yml: provisions the Incus container and applies the common baseline + role.
- inventory/lab.yml: new nginx_cache group.
- infra/ansible/tests/test_nginx_cache.sh: MISS→HIT roundtrip via X-Cache-Status, plus on-disk cache entry verification.

Acceptance: the smoke test reports MISS then HIT for the same URL; the cache directory carries on-disk entries.

No backend code change: the cache is transparent. To route through it, flip AWS_S3_ENDPOINT=http://nginx-cache.lxd:80 in the API env.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
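The rendered veza-cache site is not part of this file view, so the sketch below only illustrates the description above; it is not the actual template. The MinIO node hostnames, the port 9000, and the location patterns are assumptions, while the zone name, TTLs, keepalive count, X-Cache-Status header, and stub_status endpoint come from the commit message. The slice-based segment location is sketched separately after the template.

    # Illustrative sketch of sites-enabled/veza-cache.conf (not the rendered template).
    upstream minio_nodes {
        server minio-1.lxd:9000;   # placeholders: one entry per MinIO node
        server minio-2.lxd:9000;
        keepalive 32;              # keepalive=32 from the role description; the real
                                   # template must also set proxy_http_version 1.1 and an
                                   # empty Connection header for the pool to be reused
    }

    server {
        listen 80;
        server_name nginx-cache.lxd;

        proxy_cache veza_cache;    # single shared zone defined in nginx.conf

        # Surface HIT/MISS/STALE on every response for smoke tests and operators.
        add_header X-Cache-Status $upstream_cache_status always;

        # Playlists: short TTL so listeners pick up new segments quickly.
        location ~ \.m3u8$ {
            proxy_pass http://minio_nodes;
            proxy_cache_valid 200 60s;
        }

        # Everything that is neither a playlist nor a segment: 1h.
        location / {
            proxy_pass http://minio_nodes;
            proxy_cache_valid 200 1h;
        }
    }

    # stub_status for the future Prometheus nginx_exporter sidecar.
    server {
        listen 127.0.0.1:81;
        location = /__nginx_status {
            stub_status;
        }
    }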
87 lines · 3.6 KiB · Django/Jinja
# Managed by Ansible — do not edit by hand.
# Top-level nginx config tuned for proxy_cache duty in front of MinIO.
# Site-specific config lives in sites-enabled/veza-cache.conf.

user www-data;
worker_processes {{ nginx_cache_worker_processes }};
worker_rlimit_nofile 65535;
pid /run/nginx.pid;

events {
    worker_connections {{ nginx_cache_worker_connections }};
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    server_tokens off;
    client_max_body_size 0;  # streaming proxy — no enforced upload cap

    proxy_buffering on;
    proxy_request_buffering off;

    types_hash_max_size 2048;
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    # ----------------------------------------------------------------
    # Cache zones. Single shared zone for all object types; segments
    # and playlists are differentiated by Cache-Control / proxy_cache_valid
    # rules in the site config, not by separate zones (saves keys_zone
    # memory + simplifies invalidation).
    # ----------------------------------------------------------------
    proxy_cache_path {{ nginx_cache_root }}
                     levels={{ nginx_cache_levels }}
                     keys_zone=veza_cache:128m
                     max_size={{ nginx_cache_max_size }}
                     inactive={{ nginx_cache_inactive }}
                     use_temp_path=off;

    # The cache key. We deliberately exclude Authorization/Cookie
    # because these requests are presigned URLs (signature in query
    # string, no auth headers). If we ever cache authenticated traffic
    # we'd need to add `$http_authorization` here — but never combine
    # with `$cookie_session` or we leak per-user objects across users.
    proxy_cache_key "$scheme$request_method$host$uri$is_args$args";

    # Honor stale entries while the origin is unhealthy / updating.
    proxy_cache_use_stale error timeout updating
                          http_500 http_502 http_503 http_504;
    proxy_cache_background_update on;
    proxy_cache_lock on;
    proxy_cache_lock_timeout 5s;
    proxy_cache_revalidate on;

    # Pass Range through to MinIO so byte-range requests are honored.
    # Range responses are cached as 206 Partial Content, which Nginx
    # handles correctly when slice module is enabled (we turn it on
    # in the site block when needed).
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header Range $http_range;
    proxy_set_header If-Range $http_if_range;

    # Logs — compact format with HIT/MISS/STALE so dashboards can
    # compute hit ratio without a separate exporter.
    log_format veza_cache '$remote_addr - $remote_user [$time_iso8601] '
                          '"$request" $status $body_bytes_sent '
                          '"$http_referer" "$http_user_agent" '
                          'cache=$upstream_cache_status rt=$request_time '
                          'urt=$upstream_response_time';
    access_log /var/log/nginx/veza-cache.access.log veza_cache;
    error_log /var/log/nginx/veza-cache.error.log warn;

    gzip on;
    gzip_vary on;
    gzip_proxied any;
    gzip_types application/vnd.apple.mpegurl text/css application/javascript
               application/json text/plain text/xml;
    # Audio segments are already compressed (mpeg-ts, fmp4) — don't
    # waste CPU re-gzipping them.

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;
}
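The Range comment in the template above defers the slice module to the site block, and the commit message specifies 1 MiB slices with slice_range in the cache key and a 7-day TTL for segments. Below is a hedged sketch of what that segment location could look like, continuing the assumptions of the earlier sketch: the extension pattern is a guess, and it presumes the packaged nginx ships with the slice module.

    # Illustrative sketch only: HLS segments, fetched and cached as 1 MiB slices.
    location ~ \.(ts|m4s)$ {
        proxy_pass http://minio_nodes;

        slice 1m;
        # A proxy_set_header at this level replaces the inherited http-level set,
        # so repeat the headers that matter for presigned-URL validation.
        proxy_set_header Host $host;
        proxy_set_header Range $slice_range;   # request only the current slice from MinIO
        proxy_set_header Connection "";
        proxy_cache_key "$scheme$request_method$host$uri$is_args$args$slice_range";
        proxy_cache_valid 200 206 7d;          # slices are cached as 206 Partial Content
    }

With slice_range in the key, a client asking for an arbitrary byte offset reuses the same cached 1 MiB chunks instead of creating a new cache entry per offset, which is the behavior the commit message describes.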