senke 2aea1af361 docs(J2): align docs with reality — rewrite CLAUDE.md, fix README, purge chat-server refs
Completes Day 2 of the v1.0.3 → v1.0.4 cleanup sprint. The documentation
now describes the actual repo layout instead of a fictional one.

CLAUDE.md — complete rewrite
  Old version referenced paths that don't exist and a protocol aimed at
  implementing v0.11.0 (current tag: v1.0.3). The agent was following a
  map for a city that had been rebuilt.
  - backend/        → veza-backend-api/
  - frontend/       → apps/web/
  - ORIGIN/ (root)  → veza-docs/ORIGIN/
  - veza-chat-server → merged into backend-api (v0.502, commit 279a10d31)
  - apps/desktop/   → never existed
  Also refreshed: stack versions (Go 1.25, Vite 5, React 18.2, Axum 0.8),
  commands, conventions, hook bypasses (SKIP_TYPES/SKIP_TESTS/SKIP_E2E),
  scope rules kept as immutable (no AI/ML, no Web3, no gamification, no
  dark patterns, no public popularity metrics).

README.md — targeted fixes
  - "Version cible: v0.101" → "Version courante: v1.0.4"
  - "Development Setup (v0.9.3)" → "Development Setup"
  - Removed Desktop (Electron) section — never implemented
  - Removed veza-chat-server from structure — merged into backend
  - Removed deprecated compose files section (nothing is DEPRECATED now)

k8s runbooks — remove stale chat-server references
  The disaster-recovery runbooks still scaled/restarted a deployment
  that no longer exists. In a real failover these commands would have
  failed silently and blocked the procedure. Files patched:
    - k8s/disaster-recovery/runbooks/cluster-failover.md
    - k8s/disaster-recovery/runbooks/data-restore.md
    - k8s/disaster-recovery/runbooks/database-failover.md
    - k8s/disaster-recovery/runbooks/rollback-procedure.md
    - k8s/network-policies/README.md
    - k8s/secrets/README.md
    - k8s/secrets.yaml.example
  Each reference is replaced by a short inline note pointing to v0.502
  (commit 279a10d31) so future readers understand the history.

.env.example — remove CHAT_JWT_SECRET
  Legacy env var for the deleted chat server. Replaced by an explanatory
  comment.

Not in this commit (user handles on Forgejo):
  - Closing the 5 open dependabot PRs on veza-chat-server/* branches
  - Deleting those 5 remote branches after the PRs are closed

Refs: AUDIT_REPORT.md §5.1, §7.1, §10 P1, §10 P4
2026-04-14 17:23:50 +02:00

Kubernetes Deployment Manifests

This directory contains the Kubernetes manifests for deploying the Veza Platform to production.

Structure

k8s/
├── namespace.yaml              # Namespace definition
├── configmap.yaml              # Configuration values
├── secrets.yaml.example        # Example secrets (DO NOT COMMIT REAL SECRETS)
├── ingress.yaml                # Ingress configuration
├── backend-api/
│   ├── deployment.yaml         # Backend API deployment
│   └── service.yaml            # Backend API service
├── frontend/
│   ├── deployment.yaml         # Frontend deployment
│   └── service.yaml            # Frontend service
└── stream-server/
    └── (see veza-stream-server/k8s/production/)

Prerequisites

  • Kubernetes cluster 1.24+
  • kubectl configured
  • Docker images built and pushed to registry
  • Secrets configured (see secrets.yaml.example)
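Before starting, the prerequisites above can be sanity-checked from the shell; this is a minimal sketch, and the presence of `k8s/secrets.yaml.example` is the only repo-specific assumption:

```shell
# Verify kubectl can reach the cluster and the server version is 1.24+
kubectl version
kubectl cluster-info

# Confirm the active context points at the intended cluster
kubectl config current-context

# Confirm the example secrets file is present before filling in real values
test -f k8s/secrets.yaml.example && echo "secrets example found"
```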

Deployment Steps

1. Create Namespace

kubectl apply -f k8s/namespace.yaml

2. Create Secrets

# Copy example and fill in real values
cp k8s/secrets.yaml.example k8s/secrets.yaml
# Edit secrets.yaml with real values
kubectl create secret generic veza-secrets \
  --from-env-file=k8s/secrets.yaml \
  -n veza-production
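Because `kubectl create secret` is imperative, it can be rehearsed locally before touching the cluster. A sketch using the standard `--dry-run=client` flag:

```shell
# Render the secret locally without sending it to the cluster
kubectl create secret generic veza-secrets \
  --from-env-file=k8s/secrets.yaml \
  --dry-run=client -o yaml

# After creating it for real, confirm the expected keys are present
# (values are shown only as byte counts, not in plaintext)
kubectl describe secret veza-secrets -n veza-production
```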

3. Create ConfigMap

kubectl apply -f k8s/configmap.yaml

4. Deploy Services

# Backend API
kubectl apply -f k8s/backend-api/

# Frontend
kubectl apply -f k8s/frontend/

# Stream Server (if separate)
kubectl apply -f veza-stream-server/k8s/production/

5. Create Ingress

kubectl apply -f k8s/ingress.yaml
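Once the ingress controller has assigned an address, a quick smoke test confirms end-to-end routing. The hostname and `/health` path below are assumptions for illustration; substitute the real values from your ingress spec:

```shell
# Watch until the ingress is assigned an external address
kubectl get ingress -n veza-production -w

# Smoke-test the public endpoint (hostname and path are placeholders)
curl -fsS -o /dev/null -w "%{http_code}\n" https://veza.example.com/health
```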

Verification

# Check pods
kubectl get pods -n veza-production

# Check services
kubectl get svc -n veza-production

# Check ingress
kubectl get ingress -n veza-production

# View logs
kubectl logs -f deployment/veza-backend-api -n veza-production

Scaling

# Scale backend API
kubectl scale deployment veza-backend-api --replicas=5 -n veza-production

# Scale frontend
kubectl scale deployment veza-frontend --replicas=3 -n veza-production
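Manual scaling can be complemented with a HorizontalPodAutoscaler (the repo's `autoscaling/` directory suggests one is in use). A sketch via `kubectl autoscale`; the replica bounds and 70% CPU target are illustrative, and metrics-server must be installed:

```shell
# Autoscale the backend between 3 and 10 replicas at ~70% average CPU
kubectl autoscale deployment veza-backend-api \
  --min=3 --max=10 --cpu-percent=70 \
  -n veza-production

# Inspect the resulting HorizontalPodAutoscaler
kubectl get hpa -n veza-production
```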

Updates

# Update image
kubectl set image deployment/veza-backend-api \
  backend-api=veza-backend-api:v1.1.0 \
  -n veza-production

# Rollout status
kubectl rollout status deployment/veza-backend-api -n veza-production
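If a rollout misbehaves, it can be reverted with `kubectl rollout undo`; the revision number in the last command is an example:

```shell
# Inspect the rollout history of the deployment
kubectl rollout history deployment/veza-backend-api -n veza-production

# Roll back to the previous revision
kubectl rollout undo deployment/veza-backend-api -n veza-production

# Or pin a specific revision from the history (2 is illustrative)
kubectl rollout undo deployment/veza-backend-api --to-revision=2 -n veza-production
```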

Troubleshooting

# Describe pod
kubectl describe pod <pod-name> -n veza-production

# Get events
kubectl get events -n veza-production --sort-by='.lastTimestamp'

# Port forward for debugging
kubectl port-forward deployment/veza-backend-api 8080:8080 -n veza-production
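Two further commands that often help when the above is not enough; both are standard kubectl, though the exec example assumes the container image ships a shell:

```shell
# Open an interactive shell inside a running container of the deployment
kubectl exec -it deployment/veza-backend-api -n veza-production -- sh

# For a crash-looping pod, read the logs of the previous container instance
kubectl logs <pod-name> -n veza-production --previous
```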