veza/veza-stream-server/docker-compose.yml
senke 113210734c chore(infra): J6 — mark 3 dormant docker-compose files as deprecated
Audit cross-checked against active composes shows three dormant compose
files that duplicate functionality already covered by the canonical
docker-compose.{,dev,prod,staging,test}.yml at the repo root. None are
referenced from Make targets, scripts, or CI workflows. They have
diverged from the active set (different ports, older Postgres version,
no shared volume names, etc.) and are a footgun for new contributors.

Files marked DEPRECATED with a header pointing at the canonical compose
to use instead:

  veza-stream-server/docker-compose.yml
    Standalone stream-server compose. Same service is provided by the
    root docker-compose.yml under the `docker-dev` profile.

  infra/docker-compose.lab.yml
    Lab Postgres on default port 5432. Conflicts with a host Postgres on
    most setups; root docker-compose.dev.yml uses non-default ports for
    a reason.

  config/docker/docker-compose.local.yml
    Local Postgres 15 variant on port 5433. Redundant with root
    docker-compose.dev.yml (Postgres 16, project-wide port mapping).

Not in this commit (intentionally limited J6 scope, per audit plan
"verify, don't refactor"):

  - No `extends:` consolidation across the active composes — that is a
    1-2 day refactor on its own and not a v1.0.4 concern.
  - The five active composes were syntactically validated locally
    (`docker compose config`); production and staging both require
    operator-injected env vars (DB_PASS, S3_*, RABBITMQ_PASS, etc.),
    which is the intended behavior, not a bug.
  - Cross-compose audit confirms zero references to the removed
    chat-server or any other dead service / image. Only one residual
    deprecation warning across all active composes: the obsolete
    `version:` field on docker-compose.{prod,test}.yml — cosmetic,
    not blocking.
  - Test suite verification (Go / Rust / Vitest) deferred to Forgejo CI
    rather than re-running locally. The pre-push hook + remote pipeline
    will gate the next push.
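
The local validation step above can be sketched as a small shell loop.
The file names follow the root-level set named in this commit; `docker
compose config -q` only parses and validates, it starts nothing:

```shell
# Validate each active compose file, as the audit did. Skips gracefully
# when docker or a file is absent (e.g. outside the repo root).
files="docker-compose.yml docker-compose.dev.yml docker-compose.prod.yml \
docker-compose.staging.yml docker-compose.test.yml"
checked=0
for f in $files; do
  if command -v docker >/dev/null 2>&1 && [ -f "$f" ]; then
    # -q suppresses the rendered config; syntax errors and missing
    # required env vars surface on stderr with a nonzero exit.
    docker compose -f "$f" config -q && echo "$f: OK"
  fi
  checked=$((checked + 1))
done
echo "examined $checked compose files"
```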

Follow-up candidates (not blocking v1.0.4):
  - Delete the three deprecated files once a 2-month grace period
    confirms no local dev workflow references them.
  - Drop the obsolete `version:` field across the active composes.
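
The grace-period check in the first follow-up item can be sketched as
below. The three paths are the files marked in this commit; the places
searched (Makefile, `scripts/`, `.forgejo/`) are assumptions about the
repo layout and should be widened to match the actual tree:

```shell
# Confirm nothing still references the deprecated compose files
# before deleting them. Run from the repo root.
deprecated="veza-stream-server/docker-compose.yml \
infra/docker-compose.lab.yml \
config/docker/docker-compose.local.yml"
hits=0
for path in $deprecated; do
  # Quiet recursive grep; missing search dirs are ignored.
  if grep -rq --exclude-dir=.git -e "$path" Makefile scripts .forgejo 2>/dev/null; then
    echo "still referenced: $path"
    hits=$((hits + 1))
  fi
done
echo "$hits of 3 deprecated files still referenced"
```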

Refs: AUDIT_REPORT.md §6.1, §10 P7
2026-04-15 12:58:39 +02:00

# ============================================================================
# DEPRECATED — standalone stream-server compose.
# Use docker-compose.yml (root) with the `docker-dev` profile instead:
# docker compose --profile docker-dev up -d stream-server
# Marked in v1.0.4 cleanup. Candidate for deletion once no local dev
# reference is found (2+ month grace period).
# ============================================================================
version: "3.8"

services:
  stream-server:
    build:
      context: .
      dockerfile: Dockerfile
      args:
        # Note: compose does not execute command substitution; unless
        # BUILD_TIME / RUST_VERSION are exported by the invoking shell,
        # the literal "$(...)" strings are passed through as build args.
        BUILD_TIME: ${BUILD_TIME:-$(date -u +"%Y-%m-%dT%H:%M:%SZ")}
        RUST_VERSION: ${RUST_VERSION:-$(rustc --version)}
    container_name: stream-server
    restart: unless-stopped
    # Network configuration
    ports:
      - "${HOST_PORT:-8082}:8082"
    # Environment variables
    environment:
      - SECRET_KEY=${SECRET_KEY}
      - STREAM_SERVER_PORT=8082
      - AUDIO_DIR=/app/audio
      - ALLOWED_ORIGINS=${ALLOWED_ORIGINS:-http://localhost:5173}
      - MAX_FILE_SIZE=${MAX_FILE_SIZE:-104857600}
      - MAX_RANGE_SIZE=${MAX_RANGE_SIZE:-10485760}
      - SIGNATURE_TOLERANCE=${SIGNATURE_TOLERANCE:-60}
      - RUST_LOG=${RUST_LOG:-stream_server=info}
      - ADMIN_TOKEN=${ADMIN_TOKEN:-}
    # Volume mounts
    volumes:
      - ./audio:/app/audio:ro
      - ./logs:/app/logs:rw
      - stream_server_cache:/tmp
    # Resource limits
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: "1.0"
        reservations:
          memory: 128M
          cpus: "0.25"
    # Security configuration
    security_opt:
      - no-new-privileges:true
    cap_drop:
      - ALL
    cap_add:
      - CHOWN
      - SETGID
      - SETUID
    read_only: true
    tmpfs:
      - /tmp:rw,noexec,nosuid,size=100m
    # Health check
    healthcheck:
      test: ["CMD", "/usr/local/bin/healthcheck.sh"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 10s
    # Network
    networks:
      - stream_network

  # Reverse proxy (optional)
  nginx:
    image: nginx:alpine
    container_name: stream-nginx
    restart: unless-stopped
    depends_on:
      - stream-server
    ports:
      - "${NGINX_PORT:-80}:80"
      - "${NGINX_SSL_PORT:-443}:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
      - ./ssl:/etc/nginx/ssl:ro
      - nginx_cache:/var/cache/nginx
    networks:
      - stream_network
    profiles:
      - with-proxy

  # Monitoring with Prometheus (optional)
  prometheus:
    image: prom/prometheus:latest
    container_name: stream-prometheus
    restart: unless-stopped
    ports:
      - "${PROMETHEUS_PORT:-9090}:9090"
    volumes:
      - ./monitoring/prometheus.yml:/etc/prometheus/prometheus.yml:ro
      - prometheus_data:/prometheus
    command:
      - "--config.file=/etc/prometheus/prometheus.yml"
      - "--storage.tsdb.path=/prometheus"
      - "--web.console.libraries=/etc/prometheus/console_libraries"
      - "--web.console.templates=/etc/prometheus/consoles"
      - "--storage.tsdb.retention.time=200h"
      - "--web.enable-lifecycle"
    networks:
      - stream_network
    profiles:
      - monitoring

  # Grafana for visualization (optional)
  grafana:
    image: grafana/grafana:latest
    container_name: stream-grafana
    restart: unless-stopped
    depends_on:
      - prometheus
    ports:
      - "${GRAFANA_PORT:-3000}:3000"
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=${GRAFANA_PASSWORD:-admin}
      - GF_USERS_ALLOW_SIGN_UP=false
    volumes:
      - grafana_data:/var/lib/grafana
      - ./monitoring/grafana/dashboards:/etc/grafana/provisioning/dashboards:ro
      - ./monitoring/grafana/datasources:/etc/grafana/provisioning/datasources:ro
    networks:
      - stream_network
    profiles:
      - monitoring

networks:
  stream_network:
    driver: bridge
    ipam:
      config:
        - subnet: 172.20.0.0/16

volumes:
  stream_server_cache:
    driver: local
  nginx_cache:
    driver: local
  prometheus_data:
    driver: local
  grafana_data:
    driver: local
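
For anyone still using this file during the grace period: the only
variable above with no default is SECRET_KEY (ADMIN_TOKEN defaults to
empty, GRAFANA_PASSWORD to `admin`). A minimal `.env` sketch with
placeholder values — not a real configuration:

```shell
# Required: no default in the compose file; generate your own secret.
SECRET_KEY=change-me-generate-a-real-secret
# Optional overrides; compose-file defaults shown in the file above.
HOST_PORT=8082
ALLOWED_ORIGINS=http://localhost:5173
# Avoid shipping the 'admin' default.
GRAFANA_PASSWORD=change-me
```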