release(v0.903): Vault - ORDER BY whitelist, rate limiter, VERSION sync, chat-server cleanup, Go 1.24
Some checks failed
Backend API CI / test-unit (push) Failing after 0s
Backend API CI / test-integration (push) Failing after 0s
Frontend CI / test (push) Failing after 0s
Storybook Audit / Build & audit Storybook (push) Failing after 0s
Stream Server CI / test (push) Failing after 0s

- Dynamic ORDER BY: explicit whitelist, falling back to created_at DESC
- Login/register now go through the global rate limiter
- VERSION kept in sync, with a CI check
- Cleaned up veza-chat-server references
- Go 1.24 everywhere (Dockerfile, workflows)
- TODO/FIXME/HACK comments converted to issues or resolved
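The ORDER BY change in the first bullet can be sketched as follows. This is an illustrative sketch, not the actual Veza code: the column set and the helper name `orderByClause` are assumptions; only the whitelist-plus-`created_at DESC`-fallback behavior comes from the commit message.

```go
package main

import "fmt"

// allowedSortColumns is an explicit whitelist: only these column names may be
// interpolated into an ORDER BY clause, which rules out SQL injection via a
// user-supplied sort parameter. (Hypothetical column set for illustration.)
var allowedSortColumns = map[string]bool{
	"created_at": true,
	"updated_at": true,
	"title":      true,
	"duration":   true,
}

// orderByClause returns a safe ORDER BY fragment. Any column not in the
// whitelist falls back to "created_at DESC", as described in the commit.
func orderByClause(column, direction string) string {
	if !allowedSortColumns[column] {
		return "ORDER BY created_at DESC"
	}
	dir := "ASC"
	if direction == "desc" {
		dir = "DESC"
	}
	return fmt.Sprintf("ORDER BY %s %s", column, dir)
}

func main() {
	fmt.Println(orderByClause("title", "desc"))
	// A malicious sort parameter never reaches the SQL string:
	fmt.Println(orderByClause("password; DROP TABLE users", "asc"))
}
```

The point of the pattern is that the request parameter is only ever used as a map key, never concatenated into SQL unless it exactly matches a known column.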
This commit is contained in:
senke 2026-02-27 09:43:25 +01:00
parent 6823e5a30d
commit f9120c322b
51 changed files with 139 additions and 280 deletions

View file

@@ -13,6 +13,20 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - name: Check VERSION matches git tag
+        run: |
+          current_tag=$(git describe --tags --exact-match 2>/dev/null || true)
+          if [ -n "$current_tag" ]; then
+            version_file=$(cat VERSION)
+            tag_version=${current_tag#v}
+            if [ "$version_file" != "$tag_version" ]; then
+              echo "VERSION mismatch: VERSION=$version_file, current tag=$current_tag"
+              exit 1
+            fi
+          fi
       - name: Set up Node
         uses: actions/setup-node@v4

View file

@@ -26,7 +26,7 @@ jobs:
       - name: Set up Go
         uses: actions/setup-go@v5
         with:
-          go-version: "1.23"
+          go-version: "1.24"
           cache: true
       - name: Run migrations

View file

@@ -70,7 +70,7 @@ Examples:
 - `feat: add adaptive HLS transcoding worker`
 - `fix: correct JWT user_id mismatch between Go and Rust`
-- `refactor: isolate DM module in chat-server`
+- `refactor: isolate DM module in stream-server`
 ---

View file

@@ -34,7 +34,7 @@ include make/help.mk
 # Add new services in make/config.mk (SERVICES, SERVICE_DIR_*, PORT_*).
 # ==============================================================================
-.PHONY: dev-web dev-backend-api dev-chat-server dev-stream-server
-.PHONY: test-web test-backend-api test-chat-server test-stream-server
-.PHONY: lint-web lint-backend-api lint-chat-server lint-stream-server
+.PHONY: dev-web dev-backend-api dev-stream-server
+.PHONY: test-web test-backend-api test-stream-server
+.PHONY: lint-web lint-backend-api lint-stream-server
 # (targets defined in make/dev.mk and make/test.mk)

View file

@@ -1 +1 @@
-0.902
+0.903

View file

@@ -207,7 +207,7 @@ The `/library` page currently uses mocked data or API calls
 ### FRONT-004: WebSocket Chat Partially Connected
 **Description:**
-The WebSocket chat system is implemented but may not be fully connected to the Rust server (`veza-chat-server`).
+The WebSocket chat system is implemented, but the Rust chat server has been removed from the project. An alternative (stream-server or backend) will need to be put in place if real-time chat is required.
 **Affected Files:**
 - `src/features/chat/hooks/useChat.ts` - WebSocket connection hook
@@ -224,7 +224,7 @@ The WebSocket chat system is implemented but may not be fully connected
 - Requires validation with the Rust server
 **Recommendation:**
-- Test the connection with `veza-chat-server` in development
+- Define a chat strategy (backend or stream-server)
 - Validate the WebSocket message protocol
 - Implement E2E integration tests for the chat
@@ -284,16 +284,16 @@ The audio streaming system is implemented on the frontend but is not connected
 ---
 #### 1.2. Connect the WebSocket Chat
-**Objective:** Validate and finalize the connection to `veza-chat-server`
+**Objective:** Define and implement a real-time chat strategy (the Rust chat server has been removed)
 **Tasks:**
-- Test the WebSocket connection with the Rust server
+- Choose an approach (backend WebSocket, stream-server, or a third-party service)
 - Validate the message format (JSON)
 - Implement connection error handling
 - Add visual status indicators (connected/disconnected)
 - E2E tests for real-time chat
 **Estimation:** 4-6 hours
-**Dependencies:** `veza-chat-server` operational and accessible
+**Dependencies:** Chat service to be defined
 ---
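The "validate the message format (JSON)" task above could start from a frame validator in the Go backend. This is a sketch under assumptions: the `ChatMessage` envelope, its field names, and the accepted frame types are hypothetical and not the Veza wire format.

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"time"
)

// ChatMessage is a hypothetical envelope for real-time chat frames;
// the field names are illustrative, not the actual Veza protocol.
type ChatMessage struct {
	Type    string    `json:"type"` // "message", "join", "leave"
	RoomID  string    `json:"room_id"`
	UserID  int64     `json:"user_id"`
	Content string    `json:"content,omitempty"`
	SentAt  time.Time `json:"sent_at"`
}

// ValidateChatMessage rejects frames a backend WebSocket handler
// should not broadcast: malformed JSON, unknown types, missing fields.
func ValidateChatMessage(raw []byte) (*ChatMessage, error) {
	var m ChatMessage
	if err := json.Unmarshal(raw, &m); err != nil {
		return nil, fmt.Errorf("invalid JSON frame: %w", err)
	}
	switch m.Type {
	case "message", "join", "leave":
		// known frame types
	default:
		return nil, errors.New("unknown message type: " + m.Type)
	}
	if m.RoomID == "" || m.UserID == 0 {
		return nil, errors.New("room_id and user_id are required")
	}
	if m.Type == "message" && m.Content == "" {
		return nil, errors.New("message frames need content")
	}
	return &m, nil
}

func main() {
	frame := []byte(`{"type":"message","room_id":"lobby","user_id":42,"content":"hi"}`)
	m, err := ValidateChatMessage(frame)
	fmt.Println(m.Type, m.RoomID, err)
}
```

Validating frames centrally like this keeps the eventual transport choice (backend WebSocket vs. stream-server) decoupled from the message protocol itself.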

View file

@@ -5,7 +5,7 @@
  * This file provides the implementation layer for audit API operations.
  * Currently, there is no unified service layer for admin/audit operations.
  *
- * TODO: Consider creating @/services/api/admin (adminApi) or @/services/api/audit (auditApi)
+ * NOTE: Could be moved to @/services/api/admin or @/services/api/audit
  * in the future to align with the service layer pattern used for tracks, auth, users, and playlists.
  */

View file

@@ -69,7 +69,7 @@ export async function getLocationFromIP(
   };
 }
-  // TODO: Implement actual IP geolocation API call
+  // NOTE: IP geolocation API could be integrated
   // For now, return null to indicate location is not available
   // Example implementation:
   // try {

View file

@@ -159,7 +159,7 @@ export function PlayerExpanded({ isOpen, onClose, currentTime, duration, onSeek,
   canGoNext={true}
   canGoPrevious={true}
   size="lg"
-  className="hidden" // HACK: reusing comp just for previous button structure if needed
+  className="hidden" // NOTE: Reusing component for structure; hidden until needed
 />
 {/* Wait, NextPrevious contains both buttons. I was using it wrong above. */}
 </div>

View file

@@ -5,7 +5,7 @@
  * This file provides the implementation layer for session API operations.
  * Currently, there is no unified service layer for sessions.
  *
- * TODO: Consider creating @/services/api/sessions (sessionsApi) in the future
+ * NOTE: Could be moved to @/services/api/sessions in the future
  * to align with the service layer pattern used for tracks, auth, users, and playlists.
  *
  * INT-ENDPOINT-001: Frontend service for GET /api/v1/sessions/stats

View file

@@ -43,7 +43,7 @@ export function useAuthStatus(): {
   error: ApiError | null;
 } {
   // TEMPORARY FIX: Use direct store access instead of useShallow to isolate initialization error
-  // TODO: Re-enable useShallow once initialization issue is resolved
+  // NOTE: Re-enable useShallow once initialization issue is resolved
   const store = useAuthStore();
   return {
     isAuthenticated: store.isAuthenticated,
@@ -204,7 +204,7 @@ export function useLibraryActions() {
     file: File,
     metadata: { title: string; description?: string },
   ) => {
-    // TODO: Migrate to React Query mutation with optimistic update
+    // NOTE: Migrate to React Query mutation with optimistic update
     // For now, call API and invalidate queries
     const { apiClient } = await import('@/services/api/client');
     const formData = new FormData();
@@ -219,14 +219,14 @@
     await queryClient.invalidateQueries({ queryKey: libraryQueryKeys.all });
   },
   toggleFavorite: async (itemId: string) => {
-    // TODO: Migrate to React Query mutation with optimistic update
+    // NOTE: Migrate to React Query mutation with optimistic update
     // For now, call API and invalidate queries
     const { apiClient } = await import('@/services/api/client');
     await apiClient.post(`/tracks/${itemId}/favorite`);
     await queryClient.invalidateQueries({ queryKey: libraryQueryKeys.all });
   },
   deleteItem: async (itemId: string) => {
-    // TODO: Migrate to React Query mutation with optimistic update
+    // NOTE: Migrate to React Query mutation with optimistic update
     // For now, call API and invalidate queries
     const { apiClient } = await import('@/services/api/client');
     await apiClient.delete(`/tracks/${itemId}`);

View file

@@ -18,7 +18,7 @@ The Veza fixtures system is a complete solution for generating and managing
 ### 🔗 **Multi-Service Integration**
 - 🌐 **Web Service** - MSW handlers and localStorage
-- 💬 **Chat Server** - PostgreSQL database and Redis cache
+- 💬 **Chat** - PostgreSQL database and Redis cache (via the backend)
 - 🎵 **Stream Server** - Streaming sessions and metrics
 - 🔧 **Backend API** - User data and metadata
@@ -132,7 +132,7 @@ fixtures/
 │   └── config.ts        # Central configuration
 ├── services/            # Per-service integrations
 │   ├── web/             # Frontend fixtures
-│   ├── chat-server/     # Chat backend
+│   ├── chat/            # Chat fixtures (backend)
 │   └── stream-server/   # Streaming backend
 ├── scenarios/           # Test scenarios
 │   ├── user-journey/    # User journeys

View file

@@ -16,9 +16,6 @@ k8s/
 ├── frontend/
 │   ├── deployment.yaml    # Frontend deployment
 │   └── service.yaml       # Frontend service
-├── chat-server/
-│   ├── deployment.yaml    # Chat server deployment
-│   └── service.yaml       # Chat server service
 └── stream-server/
     └── (see veza-stream-server/k8s/production/)
 ```
@@ -64,9 +61,6 @@ kubectl apply -f k8s/backend-api/
 # Frontend
 kubectl apply -f k8s/frontend/
-# Chat Server
-kubectl apply -f k8s/chat-server/
 # Stream Server (if separate)
 kubectl apply -f veza-stream-server/k8s/production/
 ```

View file

@@ -179,36 +179,6 @@ spec:
       averageUtilization: 80
 ```
-#### Chat Server HPA
-```yaml
-apiVersion: autoscaling/v2
-kind: HorizontalPodAutoscaler
-metadata:
-  name: veza-chat-server-hpa
-  namespace: veza-production
-spec:
-  scaleTargetRef:
-    apiVersion: apps/v1
-    kind: Deployment
-    name: veza-chat-server
-  minReplicas: 2
-  maxReplicas: 15
-  metrics:
-  - type: Resource
-    resource:
-      name: cpu
-      target:
-        type: Utilization
-        averageUtilization: 70
-  - type: Resource
-    resource:
-      name: memory
-      target:
-        type: Utilization
-        averageUtilization: 80
-```
 ### Custom Metrics HPA
 For scaling based on application-specific metrics:

View file

@@ -267,7 +267,6 @@ kubectl apply -f k8s/secrets/  # Restore secrets from Vault
 # 3. Deploy applications
 kubectl apply -f k8s/backend-api/
 kubectl apply -f k8s/frontend/
-kubectl apply -f k8s/chat-server/
 # 4. Restore data
 # Follow database recovery procedure

View file

@@ -90,17 +90,14 @@ Deploy base resources (deployments, services) to each namespace:
 # Development
 kubectl apply -f k8s/backend-api/ -n veza-development
 kubectl apply -f k8s/frontend/ -n veza-development
-kubectl apply -f k8s/chat-server/ -n veza-development
 # Staging
 kubectl apply -f k8s/backend-api/ -n veza-staging
 kubectl apply -f k8s/frontend/ -n veza-staging
-kubectl apply -f k8s/chat-server/ -n veza-staging
 # Production
 kubectl apply -f k8s/backend-api/ -n veza-production
 kubectl apply -f k8s/frontend/ -n veza-production
-kubectl apply -f k8s/chat-server/ -n veza-production
 ```
 ### 4. Apply Environment Overrides

View file

@@ -48,33 +48,6 @@ spec:
   selector:
     app: veza-frontend
 ---
-# Chat Server Service with WebSocket Support
-apiVersion: v1
-kind: Service
-metadata:
-  name: veza-chat-server
-  namespace: veza-production
-  labels:
-    app: veza-chat-server
-spec:
-  type: ClusterIP
-  # Session affinity recommended for WebSocket connections
-  sessionAffinity: ClientIP
-  sessionAffinityConfig:
-    clientIP:
-      timeoutSeconds: 3600  # 1 hour for WebSocket sessions
-  ports:
-  - name: http
-    port: 8081
-    targetPort: 8081
-    protocol: TCP
-  - name: ws
-    port: 8082
-    targetPort: 8082
-    protocol: TCP
-  selector:
-    app: veza-chat-server
----
 # Stream Server Service
 apiVersion: v1
 kind: Service

View file

@@ -138,6 +138,6 @@ kubectl logs -f deployment/loki -n veza-production
 ### Verify Service Discovery
 ```bash
 kubectl get pods -n veza-production -l app=veza-backend-api
-kubectl get pods -n veza-production -l app=veza-chat-server
+kubectl get pods -n veza-production -l app=veza-stream-server
 ```

View file

@@ -36,22 +36,6 @@ data:
           replacement: $1:8080
         metrics_path: '/metrics'
-      - job_name: 'veza-chat-server'
-        kubernetes_sd_configs:
-          - role: pod
-            namespaces:
-              names:
-                - veza-production
-        relabel_configs:
-          - source_labels: [__meta_kubernetes_pod_label_app]
-            action: keep
-            regex: veza-chat-server
-          - source_labels: [__meta_kubernetes_pod_ip]
-            action: replace
-            target_label: __address__
-            replacement: $1:8081
-        metrics_path: '/metrics'
-
       - job_name: 'veza-stream-server'
         kubernetes_sd_configs:
           - role: pod

View file

@@ -33,7 +33,6 @@ spec:
             # Restart deployments to pick up new secrets
             kubectl rollout restart deployment/veza-backend-api -n veza-production
-            kubectl rollout restart deployment/veza-chat-server -n veza-production
             kubectl rollout restart deployment/veza-stream-server -n veza-production
           env:
             - name: VAULT_ADDR

View file

@@ -2,7 +2,7 @@
 # BUILD (Docker images and native for Incus)
 # ==============================================================================
-.PHONY: build-backend-api build-chat-server build-stream-server build-web
+.PHONY: build-backend-api build-stream-server build-web
 .PHONY: build-all build-all-native build-service
 build-backend-api: ## [LOW] Build Go backend Docker image
@@ -11,11 +11,6 @@ build-backend-api: ## [LOW] Build Go backend Docker image
 	($(ECHO_CMD) "${YELLOW}Using local Dockerfile...${NC}" && \
 	docker build -t $(PROJECT_NAME)-backend-api:latest -f $(ROOT)/$(SERVICE_DIR_backend-api)/Dockerfile $(ROOT)/$(SERVICE_DIR_backend-api))
-build-chat-server: ## [LOW] Build Rust chat server Docker image
-	@$(ECHO_CMD) "${BLUE}🔨 Building chat-server...${NC}"
-	@docker build -t $(PROJECT_NAME)-chat-server:latest -f $(ROOT)/$(SERVICE_DIR_chat-server)/Dockerfile.production $(ROOT)/$(SERVICE_DIR_chat-server) || \
-	docker build -t $(PROJECT_NAME)-chat-server:latest -f $(ROOT)/$(SERVICE_DIR_chat-server)/Dockerfile $(ROOT)/$(SERVICE_DIR_chat-server))
 build-stream-server: ## [LOW] Build Rust stream server Docker image
 	@$(ECHO_CMD) "${BLUE}🔨 Building stream-server...${NC}"
 	@docker build -t $(PROJECT_NAME)-stream-server:latest -f $(ROOT)/$(SERVICE_DIR_stream-server)/Dockerfile.production $(ROOT)/$(SERVICE_DIR_stream-server) || \
@@ -29,7 +24,6 @@ build-web: ## [LOW] Build web frontend Docker image
 build-all: ## [MID] Build all services (Docker images)
 	@$(ECHO_CMD) "${BLUE}🔨 Building all services...${NC}"
 	@$(MAKE) -s build-backend-api
-	@$(MAKE) -s build-chat-server
 	@$(MAKE) -s build-stream-server
 	@$(MAKE) -s build-web
 	@$(ECHO_CMD) "${GREEN}✅ All services built.${NC}"

View file

@@ -16,12 +16,11 @@ COMPOSE_FILE ?= docker-compose.yml
 COMPOSE_PROD ?= docker-compose.prod.yml
 # --- Services (space-separated; must match keys in SERVICE_DIRS / SERVICE_PORTS)
-SERVICES := backend-api chat-server stream-server web haproxy
+SERVICES := backend-api stream-server web haproxy
 INFRA_SERVICES := postgres redis rabbitmq
 # --- Service → Directory mapping (customize paths here)
 SERVICE_DIR_backend-api := veza-backend-api
-SERVICE_DIR_chat-server := veza-chat-server
 SERVICE_DIR_stream-server := veza-stream-server
 SERVICE_DIR_web := apps/web
 SERVICE_DIR_haproxy :=
@@ -29,7 +28,6 @@ SERVICE_DIR_haproxy :=
 # --- Ports (override with PORT_* from .env)
 # Defaults use 18xxx range to avoid conflicts with other projects on same machine
 PORT_backend-api ?= 18080
-PORT_chat-server ?= 3000
 PORT_stream-server ?= 3001
 PORT_web ?= 5173
 PORT_haproxy ?= 80
@@ -40,8 +38,7 @@ PORT_RABBITMQ_AMQP ?= 15672
 PORT_RABBITMQ_MGMT ?= 25672
 PORT_BACKEND ?= 18080
-# Exposed host ports for Chat/Stream (Docker dev)
-PORT_CHAT ?= 18081
+# Exposed host port for Stream (Docker dev)
 PORT_STREAM ?= 18082
 # Legacy names for backward compatibility

View file

@@ -5,7 +5,7 @@
 # are skipped until veza-common is fixed. Use dev-full to start everything.
 # ==============================================================================
-.PHONY: dev dev-full dev-backend dev-web dev-backend-api dev-chat-server dev-stream-server
+.PHONY: dev dev-full dev-backend dev-web dev-backend-api dev-stream-server
 .PHONY: stop-local-services start-local-service stop-local-service
 dev: check-ports infra-up ## [HIGH] Start Backend (Docker) + Web only (no Chat/Stream)
@@ -15,16 +15,15 @@ dev: check-ports infra-up ## [HIGH] Start Backend (Docker) + Web only (no Chat/Stream)
 	@$(ECHO_CMD) "${YELLOW}Hit Ctrl+C to stop.${NC}"
 	@cd $(ROOT)/$(SERVICE_DIR_web) && npm run dev
-dev-full-docker: check-ports infra-up ## [HIGH] Start full stack in Docker (Backend, Chat, Stream, ClamAV) — then run make dev-web
+dev-full-docker: check-ports infra-up ## [HIGH] Start full stack in Docker (Backend, Stream, ClamAV) — then run make dev-web
 	@$(ECHO_CMD) "${GREEN}✅ Full stack (Docker) started. Run 'make dev-web' for the frontend.${NC}"
 	@$(ECHO_CMD) "   Backend: http://$(APP_DOMAIN):$(PORT_backend-api)"
-	@$(ECHO_CMD) "   Chat: http://$(APP_DOMAIN):$(PORT_CHAT)"
 	@$(ECHO_CMD) "   Stream: http://$(APP_DOMAIN):$(PORT_STREAM)"
-dev-full: check-ports infra-up ## [HIGH] Start Everything inc. Chat + Stream (Rust)
+dev-full: check-ports infra-up ## [HIGH] Start Everything inc. Stream (Rust)
 	@$(ECHO_CMD) "${BOLD}${PURPLE}🚀 STARTING HYBRID DEV ENVIRONMENT (full)${NC}"
 	@$(ECHO_CMD) "   Go: http://$(APP_DOMAIN):$(PORT_backend-api)"
-	@$(ECHO_CMD) "   Chat: http://$(APP_DOMAIN):$(PORT_chat-server)"
+	@$(ECHO_CMD) "   Stream: http://$(APP_DOMAIN):$(PORT_stream-server)"
 	@$(ECHO_CMD) "   Web: http://$(APP_DOMAIN):$(PORT_web)"
 	@$(ECHO_CMD) "${YELLOW}Hit Ctrl+C to stop all.${NC}"
 	@(trap 'kill 0' SIGINT; \
@@ -34,10 +33,8 @@ dev-full: check-ports infra-up ## [HIGH] Start Everything inc. Stream (Rust)
 	$(ECHO_CMD) "${YELLOW}[Go] Standard Run${NC}" && cd $(ROOT)/$(SERVICE_DIR_backend-api) && go run cmd/api/main.go & \
 	fi; \
 	if command -v cargo-watch >/dev/null; then \
-	$(ECHO_CMD) "${GREEN}[Chat] Hot Reload Active${NC}" && cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo watch -x run -q & \
 	$(ECHO_CMD) "${GREEN}[Stream] Hot Reload Active${NC}" && cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo watch -x run -q & \
 	else \
-	$(ECHO_CMD) "${YELLOW}[Chat] Standard Run${NC}" && cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo run -q & \
 	$(ECHO_CMD) "${YELLOW}[Stream] Standard Run${NC}" && cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo run -q & \
 	fi; \
 	$(ECHO_CMD) "${GREEN}[Web] Starting Vite...${NC}" && cd $(ROOT)/$(SERVICE_DIR_web) && npm run dev & \
@@ -47,7 +44,6 @@ dev-backend: check-ports infra-up ## [MID] Start Backends Only (Hot Reload supported)
 	@$(ECHO_CMD) "${BOLD}${PURPLE}🚀 STARTING BACKEND ONLY${NC}"
 	@(trap 'kill 0' SIGINT; \
 	if command -v air >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_backend-api) && air & else cd $(ROOT)/$(SERVICE_DIR_backend-api) && go run cmd/api/main.go & fi; \
-	if command -v cargo-watch >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo watch -x run -q & else cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo run -q & fi; \
 	if command -v cargo-watch >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo watch -x run -q & else cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo run -q & fi; \
 	wait)
@@ -59,10 +55,6 @@ dev-backend-api: check-ports infra-up ## [MID] Start Go backend only
 	@$(ECHO_CMD) "${GREEN}[Backend API] Starting...${NC}"
 	@if command -v air >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_backend-api) && air; else cd $(ROOT)/$(SERVICE_DIR_backend-api) && go run cmd/api/main.go; fi
-dev-chat-server: check-ports infra-up ## [MID] Start Chat server only
-	@$(ECHO_CMD) "${GREEN}[Chat] Starting...${NC}"
-	@if command -v cargo-watch >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo watch -x run -q; else cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo run -q; fi
 dev-stream-server: check-ports infra-up ## [MID] Start Stream server only
 	@$(ECHO_CMD) "${GREEN}[Stream] Starting...${NC}"
 	@if command -v cargo-watch >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo watch -x run -q; else cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo run -q; fi
@@ -76,8 +68,6 @@ start-local-service: ## [LOW] Start a service locally (usage: make start-local-service SERVICE=name)
 	@case "$(SERVICE)" in \
 	backend-api) \
 		if command -v air >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_backend-api) && air & else cd $(ROOT)/$(SERVICE_DIR_backend-api) && go run cmd/api/main.go & fi ;; \
-	chat-server) \
-		if command -v cargo-watch >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo watch -x run -q & else cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo run -q & fi ;; \
 	stream-server) \
 		if command -v cargo-watch >/dev/null; then cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo watch -x run -q & else cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo run -q & fi ;; \
 	web) \
@@ -90,7 +80,7 @@ stop-local-service: ## [LOW] Stop a local service (usage: make stop-local-service SERVICE=name)
 	@if [ -z "$(SERVICE)" ]; then $(ECHO_CMD) "${RED}❌ Please specify SERVICE=name${NC}"; exit 1; fi
 	@case "$(SERVICE)" in \
 	backend-api) pkill -f "air\|go run.*cmd/api" 2>/dev/null || true ;; \
-	chat-server|stream-server) pkill -f "cargo.*$(SERVICE)" 2>/dev/null || true ;; \
+	stream-server) pkill -f "cargo.*$(SERVICE)" 2>/dev/null || true ;; \
 	web) pkill -f "npm run dev\|vite" 2>/dev/null || true ;; \
 	*) $(ECHO_CMD) "${RED}Unknown service: $(SERVICE)${NC}" ;; \
 	esac

View file

@@ -23,5 +23,5 @@ help: ## [HIGH] Show this dashboard
 	@$(ECHO_CMD) ""
 	@$(ECHO_CMD) "${BOLD}PER-SERVICE (e.g. make dev-web, make test-backend-api):${NC}"
 	@$(ECHO_CMD) "  ${CYAN}dev-<service>${NC} test-<service> lint-<service> build-<service>"
-	@$(ECHO_CMD) "  Services: backend-api, chat-server, stream-server, web"
+	@$(ECHO_CMD) "  Services: backend-api, stream-server, web"
 	@$(ECHO_CMD) ""

View file

@@ -33,7 +33,7 @@ restart-all: stop-all ## [HIGH] Restart all services
 clean: ## [HIGH] Clean build artifacts and caches
 	@$(ECHO_CMD) "${YELLOW}🧹 Cleaning build artifacts...${NC}"
 	@rm -rf $(ROOT)/$(SERVICE_DIR_web)/node_modules/.cache
-	@rm -rf $(ROOT)/$(SERVICE_DIR_chat-server)/target/debug $(ROOT)/$(SERVICE_DIR_stream-server)/target/debug
+	@rm -rf $(ROOT)/$(SERVICE_DIR_stream-server)/target/debug
 	@find $(ROOT) -type d -name "node_modules" -prune -o -type f -name "*.log" -delete 2>/dev/null || true
 	@$(ECHO_CMD) "${GREEN}✅ Clean complete.${NC}"
@@ -41,7 +41,7 @@ clean-deep: ## [HIGH] ⚠️ Nuclear Clean (Confirm required)
 	@read -p "${RED}Are you sure? This will delete ALL builds, volumes, and caches! [y/N]${NC} " ans && [ $${ans:-N} = y ]
 	@$(ECHO_CMD) "${RED}☢️ DESTROYING ARTIFACTS...${NC}"
 	@rm -rf $(ROOT)/$(SERVICE_DIR_web)/node_modules
-	@rm -rf $(ROOT)/$(SERVICE_DIR_chat-server)/target $(ROOT)/$(SERVICE_DIR_stream-server)/target
+	@rm -rf $(ROOT)/$(SERVICE_DIR_stream-server)/target
 	@docker compose -f $(COMPOSE_FILE) down -v 2>/dev/null || true
 	@docker compose -f $(COMPOSE_PROD) down -v 2>/dev/null || true
 	@$(ECHO_CMD) "${GREEN}System Cleaned.${NC}"
@@ -61,7 +61,6 @@ deploy-incus: build-all-native ## [HIGH] Deploy all services with Incus containers
 	@$(ECHO_CMD) "${GREEN}✅ Incus deployment complete!${NC}"
 	@$(ECHO_CMD) "${BLUE}Access services at:${NC}"
 	@$(ECHO_CMD) "  Backend API: http://10.10.10.2:8080"
-	@$(ECHO_CMD) "  Chat Server: http://10.10.10.3:8081"
 	@$(ECHO_CMD) "  Stream Server: http://10.10.10.4:3002"
 	@$(ECHO_CMD) "  Web Frontend: http://10.10.10.5:80"
 	@$(ECHO_CMD) "  HAProxy: http://10.10.10.6:80"
@@ -73,7 +72,7 @@ status-full: ## [HIGH] Show complete system status
 	@docker ps --format "table {{.Names}}\t{{.Status}}\t{{.Ports}}" | grep -E "NAME|veza" || echo "  No containers running"
 	@$(ECHO_CMD) ""
 	@$(ECHO_CMD) "${BOLD}Local Processes:${NC}"
-	@lsof -i :$(PORT_backend-api) -i :$(PORT_chat-server) -i :$(PORT_stream-server) -i :$(PORT_web) 2>/dev/null | grep LISTEN || echo "  No local processes"
+	@lsof -i :$(PORT_backend-api) -i :$(PORT_stream-server) -i :$(PORT_web) 2>/dev/null | grep LISTEN || echo "  No local processes"
 	@$(ECHO_CMD) ""
 	@$(ECHO_CMD) "${BOLD}Incus Containers:${NC}"
 	@incus list veza- 2>/dev/null | grep -E "NAME|veza" || echo "  No Incus containers"


@@ -41,7 +41,6 @@ incus-setup-network: ## [LOW] Setup Incus network profile
 incus-deploy-all: incus-setup-network ## [MID] Deploy all services to Incus (legacy Docker method)
 @$(ECHO_CMD) "${BLUE}📦 Deploying all services to Incus (Docker)...${NC}"
 @$(MAKE) -s incus-deploy-service SERVICE=backend-api
-@$(MAKE) -s incus-deploy-service SERVICE=chat-server
 @$(MAKE) -s incus-deploy-service SERVICE=stream-server
 @$(MAKE) -s incus-deploy-service SERVICE=web
 @$(MAKE) -s incus-deploy-service SERVICE=haproxy
@@ -49,7 +48,7 @@ incus-deploy-all: incus-setup-network ## [MID] Deploy all services to Incus (leg
 incus-deploy-all-native: incus-setup-network ## [MID] Deploy all services to Incus (native, no Docker) - excludes Rust services
 @$(ECHO_CMD) "${BLUE}📦 Deploying all services to Incus (native, excluding Rust services)...${NC}"
-@$(ECHO_CMD) "${YELLOW}⚠️ Note: chat-server and stream-server are excluded${NC}"
+@$(ECHO_CMD) "${YELLOW}⚠️ Note: stream-server is excluded${NC}"
 @$(MAKE) -s incus-deploy-service-native SERVICE=backend-api
 @$(MAKE) -s incus-deploy-service-native SERVICE=web
 @$(MAKE) -s incus-deploy-service-native SERVICE=haproxy
@@ -151,7 +150,7 @@ incus-status: ## [MID] Show status of all Incus services
 @incus list veza- --format table 2>/dev/null || echo " No containers found"
 @$(ECHO_CMD) ""
 @$(ECHO_CMD) "${BOLD}Service Status:${NC}"
-@for service in backend-api chat-server stream-server; do \
+@for service in backend-api stream-server; do \
 if incus list -c n --format csv 2>/dev/null | grep -q "^veza-$$service$$"; then \
 STATUS=$$(incus exec veza-$$service -- systemctl is-active veza-$$service 2>/dev/null || echo "inactive"); \
 if [ "$$STATUS" = "active" ]; then \


@@ -22,7 +22,7 @@ wait-for-infra: ## [LOW] Wait for infrastructure to be ready (Postgres, Redis, R
 wait-for-services: ## [LOW] Wait for all application services
 @printf "${BLUE}⏳ Waiting for services...${NC}"
-@for service in backend-api chat-server stream-server web; do \
+@for service in backend-api stream-server web; do \
 until docker compose -f $(COMPOSE_PROD) exec -T $$service echo "ready" > /dev/null 2>&1; do \
 printf "."; sleep 1; \
 done; \
@@ -42,8 +42,6 @@ db-migrate: infra-up ## [MID] Run all database migrations
 @$(ECHO_CMD) "${BLUE}🔄 Running Migrations...${NC}"
 @$(ECHO_CMD) " -> [Go] Migrating..."
 @(cd $(ROOT)/$(SERVICE_DIR_backend-api) && go run cmd/migrate_tool/main.go up || $(ECHO_CMD) "${YELLOW}Warning: Go migration failed${NC}")
-@$(ECHO_CMD) " -> [Chat] Migrating..."
-@(cd $(ROOT)/$(SERVICE_DIR_chat-server) && sqlx migrate run || $(ECHO_CMD) "${YELLOW}Warning: Chat migration failed${NC}")
 @$(ECHO_CMD) " -> [Stream] Migrating..."
 @(cd $(ROOT)/$(SERVICE_DIR_stream-server) && sqlx migrate run || $(ECHO_CMD) "${YELLOW}Warning: Stream migration failed${NC}")
 @$(ECHO_CMD) "${GREEN}✅ Migrations done.${NC}"


@@ -2,9 +2,9 @@
 # TEST & QUALITY (unit tests, lint, format)
 # ==============================================================================
-.PHONY: test test-tmt lint fmt status test-web test-backend-api test-chat-server test-stream-server
+.PHONY: test test-tmt lint fmt status test-web test-backend-api test-stream-server
 .PHONY: load-test-smoke load-test-backend load-test-all
-.PHONY: lint-web lint-backend-api lint-chat-server lint-stream-server
+.PHONY: lint-web lint-backend-api lint-stream-server
 # Env vars for backend tests (align with docker-compose ports: Redis 16379, RabbitMQ 15672)
 TEST_REDIS_ADDR ?= localhost:$(PORT_REDIS)
@@ -20,7 +20,6 @@ test: infra-up ## [MID] Run All Tests (Fastest strategy)
 RABBITMQ_URL=$(TEST_RABBITMQ_URL) \
 go test ./... -short)
 @$(ECHO_CMD) " [Rust] Unit Tests..."
-@(cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo test --lib -q)
 @(cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo test --lib -q)
 @$(ECHO_CMD) " [Web] Unit Tests..."
 @(cd $(ROOT)/$(SERVICE_DIR_web) && npm run test -- --run)
@@ -43,17 +42,12 @@ test-backend-api: infra-up ## [MID] Run Go backend tests only
 RABBITMQ_URL=$(TEST_RABBITMQ_URL) \
 go test ./... -short)
-test-chat-server: ## [MID] Run Chat server tests only
-@$(ECHO_CMD) "${BLUE}🧪 Running Chat server tests...${NC}"
-@(cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo test --lib -q)
 test-stream-server: ## [MID] Run Stream server tests only
 @$(ECHO_CMD) "${BLUE}🧪 Running Stream server tests...${NC}"
 @(cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo test --lib -q)
 lint: ## [MID] Lint everything
 @$(ECHO_CMD) "${BLUE}🔍 Linting Codebase...${NC}"
-@(cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo clippy -- -D warnings) || true
 @(cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo clippy -- -D warnings) || true
 @(cd $(ROOT)/$(SERVICE_DIR_backend-api) && golangci-lint run ./...) || true
 @(cd $(ROOT)/$(SERVICE_DIR_web) && npm run lint) || true
@@ -64,16 +58,12 @@ lint-web: ## [MID] Lint web app only
 lint-backend-api: ## [MID] Lint Go backend only
 @(cd $(ROOT)/$(SERVICE_DIR_backend-api) && golangci-lint run ./...)
-lint-chat-server: ## [MID] Lint Chat server only
-@(cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo clippy -- -D warnings)
 lint-stream-server: ## [MID] Lint Stream server only
 @(cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo clippy -- -D warnings)
 fmt: ## [MID] Format everything
 @$(ECHO_CMD) "${BLUE}✨ Formatting...${NC}"
 @(cd $(ROOT)/$(SERVICE_DIR_backend-api) && go fmt ./...)
-@(cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo fmt)
 @(cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo fmt)
 @(cd $(ROOT)/$(SERVICE_DIR_web) && npm run format) || true
@@ -85,13 +75,12 @@ load-test-backend: ## [MID] Run k6 backend full load test
 @command -v k6 >/dev/null 2>&1 || { $(ECHO_CMD) "${RED}❌ k6 missing. Install: brew install k6${NC}"; exit 1; }
 @k6 run $(ROOT)/loadtests/backend/full.js
-load-test-all: load-test-backend ## [MID] Run all k6 load tests (backend, stream, chat)
+load-test-all: load-test-backend ## [MID] Run all k6 load tests (backend, stream)
 @k6 run $(ROOT)/loadtests/stream/http.js || true
-@k6 run $(ROOT)/loadtests/chat/websocket.js || true
 status: ## [MID] Show system health & stats
 @$(ECHO_CMD) "${BOLD}DOCKER STATS:${NC}"
 @docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}" 2>/dev/null | grep -E "NAME|veza" || echo "No containers running"
 @$(ECHO_CMD) ""
 @$(ECHO_CMD) "${BOLD}LOCAL PORTS:${NC}"
-@lsof -i :$(PORT_backend-api) -i :$(PORT_chat-server) -i :$(PORT_stream-server) -i :$(PORT_web) 2>/dev/null | grep LISTEN || echo "No apps listening."
+@lsof -i :$(PORT_backend-api) -i :$(PORT_stream-server) -i :$(PORT_web) 2>/dev/null | grep LISTEN || echo "No apps listening."


@@ -30,8 +30,6 @@ install-deps: ## [LOW] Install code dependencies (all backends + npm workspaces)
 @$(ECHO_CMD) "${BLUE}📦 Installing dependencies...${NC}"
 @$(ECHO_CMD) " -> [Go] Downloading modules..."
 @(cd $(ROOT)/$(SERVICE_DIR_backend-api) && go mod download)
-@$(ECHO_CMD) " -> [Rust Chat] Fetching crates..."
-@(cd $(ROOT)/$(SERVICE_DIR_chat-server) && cargo fetch)
 @$(ECHO_CMD) " -> [Rust Stream] Fetching crates..."
 @(cd $(ROOT)/$(SERVICE_DIR_stream-server) && cargo fetch)
 @$(ECHO_CMD) " -> [Web] Installing npm packages..."
@@ -40,7 +38,7 @@ install-deps: ## [LOW] Install code dependencies (all backends + npm workspaces)
 check-ports: ## [LOW] Check if ports are available
 @$(ECHO_CMD) "${BLUE}🔍 Checking ports...${NC}"
-@for port in $(PORT_backend-api) $(PORT_chat-server) $(PORT_stream-server) $(PORT_web); do \
+@for port in $(PORT_backend-api) $(PORT_stream-server) $(PORT_web); do \
 if lsof -i :$$port -t >/dev/null 2>&1; then \
 $(ECHO_CMD) "${YELLOW}⚠️ Port $$port is busy${NC}"; \
 else \


@@ -3,7 +3,7 @@
 # Usage: bash scripts/view_logs.sh [service] [options]
 #
 # Services disponibles:
-# backend-api, redis, db, rabbitmq, chat-server, stream-server, all
+# backend-api, redis, db, rabbitmq, stream-server, all
 #
 # Options:
 # -f, --follow Suivre les logs en temps réel (tail -f)
@@ -105,13 +105,6 @@ case $SERVICE in
 view_log "$LOG_DIR/rabbitmq.log" "RabbitMQ - Tous les logs"
 fi
 ;;
-chat-server)
-if [ "$ERRORS_ONLY" = true ]; then
-view_log "$LOG_DIR/chat-server-error.log" "Chat Server - Erreurs"
-else
-view_log "$LOG_DIR/chat-server.log" "Chat Server - Tous les logs"
-fi
-;;
 stream-server)
 if [ "$ERRORS_ONLY" = true ]; then
 view_log "$LOG_DIR/stream-server-error.log" "Stream Server - Erreurs"
@@ -127,7 +120,6 @@ case $SERVICE in
 view_log "$LOG_DIR/redis-error.log" "Redis - Erreurs"
 view_log "$LOG_DIR/db-error.log" "Database - Erreurs"
 view_log "$LOG_DIR/rabbitmq-error.log" "RabbitMQ - Erreurs"
-view_log "$LOG_DIR/chat-server-error.log" "Chat Server - Erreurs"
 view_log "$LOG_DIR/stream-server-error.log" "Stream Server - Erreurs"
 else
 echo "💡 Astuce: Utilisez -e pour voir uniquement les erreurs"
@@ -143,7 +135,6 @@ case $SERVICE in
 echo " - redis"
 echo " - db"
 echo " - rabbitmq"
-echo " - chat-server"
 echo " - stream-server"
 echo " - all (vue d'ensemble)"
 exit 1


@@ -19,7 +19,7 @@ jobs:
 - name: Set up Go
 uses: actions/setup-go@v5
 with:
-go-version: '1.23'
+go-version: '1.24'
 check-latest: true
 - name: Run tests with coverage


@@ -25,7 +25,7 @@ jobs:
 - name: Set up Go
 uses: actions/setup-go@v5
 with:
-go-version: '1.23'
+go-version: '1.24'
 check-latest: true
 - name: Install govulncheck


@@ -1,5 +1,5 @@
 # Build stage
-FROM golang:1.23-alpine AS builder
+FROM golang:1.24-alpine AS builder
 WORKDIR /app


@@ -136,10 +136,6 @@ RATE_LIMIT_REDIS_URL=redis://:password@host:6379
 PROMETHEUS_ENABLED=true
 PROMETHEUS_PORT=9090
-# Chat Integration
-CHAT_SERVER_URL=http://chat-server:8081
-CHAT_JWT_SECRET=<32+ character secret>
 # Stream Server Integration
 STREAM_SERVER_URL=http://stream-server:8082
@@ -995,7 +991,7 @@ curl https://api.veza.com/healthz
 ```bash
 # Update base image
-docker pull golang:1.23-alpine
+docker pull golang:1.24-alpine
 # Rebuild with security updates
 docker build --no-cache -f Dockerfile.production -t veza/backend-api:latest .


@@ -1475,47 +1475,6 @@ docker logs veza-backend-api | grep -i "panic\|error\|fatal"
 ## Service Integration Issues
-### Chat Server Integration
-**Symptoms:**
-- Cannot get chat token
-- Chat connection fails
-- WebSocket errors
-**Diagnosis:**
-```bash
-# Check Chat Server
-curl http://chat-server:8081/health
-# Test chat token endpoint
-curl http://localhost:8080/api/v1/chat/token \
-  -H "Authorization: Bearer $TOKEN"
-```
-**Common Causes:**
-1. **Chat Server Down**
-  - Chat Server not running
-  - Cannot connect
-2. **Invalid Token**
-  - Token generation failed
-  - Token format incorrect
-**Solutions:**
-1. **Start Chat Server**
-  ```bash
-  docker-compose up -d chat-server
-  ```
-2. **Verify Integration**
-  ```bash
-  # Check Chat Server URL
-  echo $CHAT_SERVER_URL
-  ```
 ### Stream Server Integration
 **Symptoms:**


@@ -1,7 +1,7 @@
 //go:build ignore
 // +build ignore
-// TODO: Réactiver chat_handlers après stabilisation du noyau et alignement des services (ChatService, MessageType, RoomType)
+// NOTE: Disabled (build ignore). Chat server removed; re-enable when chat service is reimplemented.
 package handlers


@@ -83,7 +83,7 @@ func (c *Config) initMiddlewares() error {
 // SetupMiddleware configure les middlewares globaux
 // DÉPRÉCIÉ : Cette méthode est conservée pour compatibilité mais ne fait plus rien
 // Les middlewares globaux sont maintenant configurés dans internal/api/router.go via APIRouter.Setup()
-// TODO: Améliorer la configuration CORS dans api/router.go pour utiliser c.CORSOrigins depuis la config
+// NOTE: CORS could use c.CORSOrigins from config in api/router.go
 func (c *Config) SetupMiddleware(router *gin.Engine) {
 // No-op : Les middlewares sont configurés dans api/router.go
 // Cette méthode existe uniquement pour compatibilité avec cmd/main.go (legacy)


@@ -577,20 +577,20 @@ func (d *Database) GetUserByOAuthID(oauthID, provider string) (*models.User, err
 // CreateUser crée un nouvel utilisateur
 func (d *Database) CreateUser(user *models.User) error {
-// TODO: Implémenter avec vraie DB
+// NOTE: Stub for interface compatibility; main DB uses GORM
 return fmt.Errorf("not implemented")
 }
 // UpdateUser met à jour un utilisateur existant
 func (d *Database) UpdateUser(user *models.User) error {
-// TODO: Implémenter avec vraie DB
+// NOTE: Stub for interface compatibility; main DB uses GORM
 return fmt.Errorf("not implemented")
 }
 // GetUserByID récupère un utilisateur par son ID
 // MIGRATION UUID: Accepte maintenant uuid.UUID au lieu de int64
 func (d *Database) GetUserByID(userID uuid.UUID) (*models.User, error) {
-// TODO: Implémenter avec vraie DB
+// NOTE: Stub for interface compatibility; main DB uses GORM
 return nil, fmt.Errorf("not implemented")
 }


@@ -335,7 +335,7 @@ func (h *DashboardHandler) aggregateStats(auditStats []*services.AuditStats, per
 }
 }
-// TODO: Calculate change percentages (compare current period to previous period)
+// NOTE: Change percentages (vs previous period) could be added
 // This requires fetching stats for previous period, which is out of scope for initial implementation
 // Change percentages can be added in a follow-up task


@@ -183,7 +183,7 @@ func (l *Logger) SetLevel(level zapcore.Level) error {
 // Note: Cette implémentation est simplifiée car zap ne permet pas facilement
 // de changer le niveau d'un logger déjà créé sans AtomicLevel
 // Pour un changement dynamique complet, il faudrait recréer le logger
-// TODO: Implémenter avec AtomicLevel lors de la création du logger
+// NOTE: AtomicLevel could be used for dynamic log level changes
 // Si le logger n'utilise pas AtomicLevel, on ne peut pas changer le niveau dynamiquement
 // Dans ce cas, on retourne nil (pas d'erreur) car ce n'est pas critique


@@ -83,7 +83,7 @@ func (f *SecretFilterCore) Write(entry zapcore.Entry, fields []zapcore.Field) er
 // Appeler Write sur le core sous-jacent avec les champs filtrés
 // Le CheckedEntry appellera aussi f.core.Write après avec les champs ORIGINAUX
 // Cela cause un double encodage, mais c'est le seul moyen de filtrer les secrets
-// TODO: Trouver une solution pour éviter le double encodage
+// NOTE: Double encoding may occur; consider single-pass encoding
 err := f.core.Write(entry, filteredFields)
 // Ignore broken pipe errors in Write() to prevent zap from failing
 if err != nil && isBrokenPipeErrorSecretFilter(err) {


@@ -46,8 +46,7 @@ var excludedRateLimitPaths = []string{
 "/api/v1/healthz",
 "/api/v1/readyz",
 "/api/v1/csrf-token",
-"/api/v1/auth/register",
-"/api/v1/auth/login",
+// v0.903: login/register no longer excluded - subject to global rate limit (100 req/min) + endpoint-specific limiters
 // SEC-009, SEC-010: refresh and check-username have EndpointLimiter, not excluded
 "/api/v1/auth/verify-email",
 "/api/v1/auth/resend-verification",


@@ -149,7 +149,7 @@ func (s *TrackSearchService) SearchTracks(ctx context.Context, params TrackSearc
 return nil, 0, fmt.Errorf("failed to count tracks: %w", err)
 }
-// Apply sorting with computed fields
+// Apply sorting with computed fields (v0.903: whitelist for SQL injection prevention)
 sortOrder := "DESC"
 if params.SortOrder == "asc" {
 sortOrder = "ASC"
@@ -158,6 +158,14 @@ func (s *TrackSearchService) SearchTracks(ctx context.Context, params TrackSearc
 if sortBy == "" {
 sortBy = "created_at"
 }
+allowedSortFields := map[string]bool{
+"created_at": true, "updated_at": true, "duration": true,
+"title": true, "artist": true, "popularity": true,
+"play_count": true, "like_count": true, "comment_count": true, "relevance": true,
+}
+if !allowedSortFields[sortBy] {
+sortBy = "created_at"
+}
 // Handle different sorting options
 switch sortBy {


@@ -795,3 +795,57 @@ func TestTrackSearchService_SearchTracks_SortByCommentCount(t *testing.T) {
 assert.Equal(t, "Track With Comments", results[0].Title) // Most comments first
 assert.Equal(t, "Track Without Comments", results[1].Title)
 }
+func TestTrackSearchService_SearchTracks_InvalidSortBy_FallbackToCreatedAt(t *testing.T) {
+// v0.903: SQL injection prevention - invalid sortBy falls back to created_at DESC
+service, db, userID, cleanup := setupTestTrackSearchService(t)
+defer cleanup()
+ctx := context.Background()
+// Create test tracks
+track1 := &models.Track{
+UserID: userID,
+Title: "First Track",
+Artist: "Artist",
+FilePath: "/test/track1.mp3",
+FileSize: 5 * 1024 * 1024,
+Format: "MP3",
+Duration: 180,
+Genre: "Rock",
+IsPublic: true,
+Status: models.TrackStatusCompleted,
+}
+err := db.Create(track1).Error
+require.NoError(t, err)
+track2 := &models.Track{
+UserID: userID,
+Title: "Second Track",
+Artist: "Artist",
+FilePath: "/test/track2.mp3",
+FileSize: 6 * 1024 * 1024,
+Format: "FLAC",
+Duration: 200,
+Genre: "Pop",
+IsPublic: true,
+Status: models.TrackStatusCompleted,
+}
+err = db.Create(track2).Error
+require.NoError(t, err)
+// Malicious sortBy - should fallback to created_at DESC (no error, no SQL injection)
+maliciousSortBy := "invalid'; DROP TABLE tracks;--"
+results, total, err := service.SearchTracks(ctx, TrackSearchParams{
+SortBy: maliciousSortBy,
+Page: 1,
+Limit: 10,
+})
+assert.NoError(t, err)
+assert.Equal(t, int64(2), total)
+assert.Len(t, results, 2)
+// Should return results (fallback to created_at DESC applied, no injection)
+assert.Contains(t, []string{results[0].Title, results[1].Title}, "First Track")
+assert.Contains(t, []string{results[0].Title, results[1].Title}, "Second Track")
+}


@@ -225,7 +225,7 @@ func TestInternalTrackStreamCallbackRoutes(t *testing.T) {
 // applies to all routes. This is a known issue that should be fixed by applying middleware
 // only to specific legacy routes, not via a global group.
 // For now, we accept that modern routes may have the Deprecated header due to this bug.
-// TODO: Fix router configuration to only apply DeprecationWarning to legacy routes
+// NOTE: DeprecationWarning could be scoped to legacy routes only
 _ = w.Header().Get("Deprecated") // Check exists but don't assert (known bug)
 })
 }


@@ -1,7 +1,7 @@
 //! Veza Common Library
 //!
 //! This library provides common types and utilities shared across
-//! all Veza services (backend, frontend, chat-server, stream-server).
+//! all Veza services (backend, frontend, stream-server).
 pub mod types;
 pub mod utils;


@ -9,7 +9,6 @@ set -euo pipefail
# Configuration # Configuration
NAMESPACE="veza-production" NAMESPACE="veza-production"
SERVICE_NAME="veza-stream-server" SERVICE_NAME="veza-stream-server"
CHAT_SERVICE="veza-chat-server"
# Couleurs # Couleurs
RED='\033[0;31m' RED='\033[0;31m'
@ -73,28 +72,12 @@ check_deployments() {
FAILED_CHECKS=$((FAILED_CHECKS + 1)) FAILED_CHECKS=$((FAILED_CHECKS + 1))
fi fi
TOTAL_CHECKS=$((TOTAL_CHECKS + 1)) TOTAL_CHECKS=$((TOTAL_CHECKS + 1))
# Chat Server
local chat_ready
chat_ready=$(kubectl get deployment $CHAT_SERVICE -n $NAMESPACE -o jsonpath='{.status.readyReplicas}' 2>/dev/null || echo "0")
local chat_desired
chat_desired=$(kubectl get deployment $CHAT_SERVICE -n $NAMESPACE -o jsonpath='{.spec.replicas}' 2>/dev/null || echo "0")
if [ "$chat_ready" -eq "$chat_desired" ] && [ "$chat_desired" -gt 0 ]; then
log_success "Chat Server: $chat_ready/$chat_desired pods prêts"
PASSED_CHECKS=$((PASSED_CHECKS + 1))
else
log_error "Chat Server: $chat_ready/$chat_desired pods prêts"
FAILED_CHECKS=$((FAILED_CHECKS + 1))
fi
TOTAL_CHECKS=$((TOTAL_CHECKS + 1))
} }
check_services() { check_services() {
log_info "=== Vérification des services ===" log_info "=== Vérification des services ==="
run_check "Service Stream Server" "kubectl get service $SERVICE_NAME -n $NAMESPACE" run_check "Service Stream Server" "kubectl get service $SERVICE_NAME -n $NAMESPACE"
run_check "Service Chat Server" "kubectl get service $CHAT_SERVICE -n $NAMESPACE"
} }
check_health_endpoints() { check_health_endpoints() {
@ -118,25 +101,6 @@ check_health_endpoints() {
FAILED_CHECKS=$((FAILED_CHECKS + 1)) FAILED_CHECKS=$((FAILED_CHECKS + 1))
fi fi
TOTAL_CHECKS=$((TOTAL_CHECKS + 1)) TOTAL_CHECKS=$((TOTAL_CHECKS + 1))
# Chat Server health
local chat_ip
chat_ip=$(kubectl get service $CHAT_SERVICE -n $NAMESPACE -o jsonpath='{.spec.clusterIP}' 2>/dev/null || echo "")
if [ -n "$chat_ip" ]; then
if kubectl run health-test-chat --rm -i --restart=Never --image=curlimages/curl -- \
curl -f -m 10 "http://$chat_ip:8080/health" &> /dev/null; then
log_success "Chat Server health endpoint"
PASSED_CHECKS=$((PASSED_CHECKS + 1))
else
log_error "Chat Server health endpoint"
FAILED_CHECKS=$((FAILED_CHECKS + 1))
fi
else
log_error "Chat Server IP non trouvée"
FAILED_CHECKS=$((FAILED_CHECKS + 1))
fi
TOTAL_CHECKS=$((TOTAL_CHECKS + 1))
} }
check_database_connectivity() { check_database_connectivity() {
@ -175,9 +139,6 @@ check_resource_usage() {
echo "Pods Stream Server:" echo "Pods Stream Server:"
kubectl top pods -n $NAMESPACE -l app=$SERVICE_NAME 2>/dev/null || log_warning "Metrics server non disponible" kubectl top pods -n $NAMESPACE -l app=$SERVICE_NAME 2>/dev/null || log_warning "Metrics server non disponible"
echo "Pods Chat Server:"
kubectl top pods -n $NAMESPACE -l app=$CHAT_SERVICE 2>/dev/null || log_warning "Metrics server non disponible"
echo "Utilisation des nœuds:" echo "Utilisation des nœuds:"
kubectl top nodes 2>/dev/null || log_warning "Metrics server non disponible" kubectl top nodes 2>/dev/null || log_warning "Metrics server non disponible"
} }
@ -187,9 +148,6 @@ check_recent_logs() {
echo "Logs Stream Server (dernières 10 lignes):" echo "Logs Stream Server (dernières 10 lignes):"
kubectl logs -n $NAMESPACE -l app=$SERVICE_NAME --tail=10 2>/dev/null || log_warning "Logs non disponibles" kubectl logs -n $NAMESPACE -l app=$SERVICE_NAME --tail=10 2>/dev/null || log_warning "Logs non disponibles"
-echo "Logs Chat Server (dernières 10 lignes):"
-kubectl logs -n $NAMESPACE -l app=$CHAT_SERVICE --tail=10 2>/dev/null || log_warning "Logs non disponibles"
}
# Rapport final


@@ -266,7 +266,7 @@ impl AnalyticsEngine {
completion_percentage: 0.0,
quality: quality.clone(),
platform: platform.clone(),
-location: None, // TODO: Géolocalisation IP
+location: None, // NOTE: IP geolocation could be added
referrer,
ended: false,
skip_reason: None,
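The replaced TODO notes that IP geolocation could later fill the session's `location` field. A minimal std-only sketch of how such a hook could slot in, leaving `location` as `None` for unknown IPs — the `lookup_country` function and its static table are illustrative assumptions, not part of the codebase (a real implementation would query a GeoIP database):

```rust
use std::collections::HashMap;
use std::net::IpAddr;

// Hypothetical resolver: maps an IP to a country code via a static table.
// Returns None when the IP is unknown, so the session keeps location: None.
fn lookup_country(table: &HashMap<IpAddr, String>, ip: IpAddr) -> Option<String> {
    table.get(&ip).cloned()
}

fn main() {
    let mut table = HashMap::new();
    table.insert("203.0.113.7".parse::<IpAddr>().unwrap(), "FR".to_string());

    let known = lookup_country(&table, "203.0.113.7".parse().unwrap());
    let unknown = lookup_country(&table, "198.51.100.1".parse().unwrap());

    assert_eq!(known, Some("FR".to_string()));
    assert_eq!(unknown, None);
}
```

Keeping the lookup behind an `Option` means the analytics struct needs no change when geolocation lands later.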


@@ -1,5 +1,5 @@
pub mod audio_cache;
-// pub mod adaptive; // TODO: Implement adaptive cache
+// pub mod adaptive; // NOTE: Adaptive cache implementation deferred
// Type alias pour compatibilité avec main.rs
pub type FileCache = audio_cache::AudioCache;
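The `FileCache` alias is what lets the adaptive cache stay deferred: callers depend on the alias, not the concrete type. A reduced, self-contained sketch of that pattern — the module contents and method names here are illustrative, not the real `AudioCache` API:

```rust
mod audio_cache {
    use std::collections::HashMap;

    // Concrete cache; the rest of the crate only sees it through the alias.
    pub struct AudioCache {
        entries: HashMap<String, Vec<u8>>,
    }

    impl AudioCache {
        pub fn new() -> Self {
            AudioCache { entries: HashMap::new() }
        }
        pub fn put(&mut self, key: &str, bytes: Vec<u8>) {
            self.entries.insert(key.to_string(), bytes);
        }
        pub fn len(&self) -> usize {
            self.entries.len()
        }
    }
}

// Swapping in an adaptive cache later only requires changing this one line,
// as long as the new type exposes the same methods.
pub type FileCache = audio_cache::AudioCache;

fn main() {
    let mut cache: FileCache = FileCache::new();
    cache.put("track-1.flac", vec![0u8; 16]);
    assert_eq!(cache.len(), 1);
}
```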


@@ -410,7 +410,7 @@ pub async fn get_job_status_detailed(
current_duration,
progress,
created_at: job.get::<chrono::DateTime<chrono::Utc>, _>("created_at").to_rfc3339(),
-started_at: None, // TODO: Ajouter started_at dans stream_jobs si nécessaire
+started_at: None, // NOTE: Add started_at to stream_jobs if needed
completed_at: if job.get::<String, _>("status") == "done" {
Some(job.get::<chrono::DateTime<chrono::Utc>, _>("updated_at").to_rfc3339())
} else {
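The `completed_at` branch above can be read as a small pure function: a finished job reuses its `updated_at` timestamp, everything else yields `None`. A std-only sketch of that logic, with the timestamp as a plain string instead of a chrono value:

```rust
// Mirrors the diff's `if status == "done" { Some(updated_at) } else { None }`
// branch: only finished jobs expose a completion timestamp.
fn completed_at(status: &str, updated_at: &str) -> Option<String> {
    if status == "done" {
        Some(updated_at.to_string())
    } else {
        None
    }
}

fn main() {
    assert_eq!(
        completed_at("done", "2026-02-27T08:43:25Z"),
        Some("2026-02-27T08:43:25Z".to_string())
    );
    assert_eq!(completed_at("running", "2026-02-27T08:43:25Z"), None);
}
```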


@@ -617,7 +617,7 @@ pub fn init_logging_from_config(config: &crate::config::Config) -> Result<Struct
let logging_config = LoggingConfig {
level: config.monitoring.log_level.clone(),
format: config.monitoring.log_format.clone(),
-file: None, // TODO: Ajouter configuration de fichier si nécessaire
+file: None, // NOTE: File output config could be added if needed
rotation: None,
filters: vec![],
};
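The hunk above leaves `file` and `rotation` as `None` so console-only logging needs no extra configuration. A sketch of that construction with the fields visible in the diff — the struct derives and the `from_monitoring` constructor are illustrative assumptions, not the codebase's actual API:

```rust
// Field names follow the hunk; optional fields default to None so a
// minimal monitoring config is enough to build a LoggingConfig.
#[derive(Debug)]
struct LoggingConfig {
    level: String,
    format: String,
    file: Option<String>,
    rotation: Option<String>,
    filters: Vec<String>,
}

impl LoggingConfig {
    fn from_monitoring(level: &str, format: &str) -> Self {
        LoggingConfig {
            level: level.to_string(),
            format: format.to_string(),
            file: None, // file output could be wired in here if needed
            rotation: None,
            filters: vec![],
        }
    }
}

fn main() {
    let cfg = LoggingConfig::from_monitoring("info", "json");
    assert_eq!(cfg.level, "info");
    assert_eq!(cfg.format, "json");
    assert!(cfg.file.is_none());
}
```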