ORIGIN_MASTER_ARCHITECTURE.md
📋 EXECUTIVE SUMMARY
This document defines the complete and definitive technical architecture of the Veza (Talas) platform for the next 24 months. It is the absolute source of truth for all architectural decisions and may not be modified without a formal change-management process. The target architecture is a collaborative audio platform combining streaming, marketplace, social, and education features, with capacity for 100,000+ concurrent users.
🎯 OBJECTIVES
Primary Objective
Define a scalable, secure, high-performance microservices architecture able to support 600+ features across 21 business modules, with SLAs of 99.9% uptime and <100ms latency.
Secondary Objectives
- Ensure architectural consistency across 24 months of development
- Document all architectural decisions (ADR)
- Establish inter-service communication standards
- Define security, performance, and observability patterns
📖 TABLE OF CONTENTS
- Architectural Overview
- Service Architecture
- Infrastructure and Deployment
- Data Flows
- Security
- Performance and Scalability
- Observability
- Architectural Decisions (ADR)
- Naming Conventions
- Directory Structure
🔒 IMMUTABLE RULES
The following decisions are ABSOLUTE and NON-NEGOTIABLE for the next 24 months:
- Main Backend API: Go 1.23+ only (stability, performance, strong typing)
- Real-Time Services: Rust 1.75+ only (memory safety, critical performance)
- Frontend: React 18+ with TypeScript 5.3+ strict mode (no plain JavaScript)
- Main Database: PostgreSQL 15+ (ACID, relations, performance)
- Cache: Redis 7+ (in-memory, pub/sub, sessions)
- Service Communication: gRPC for inter-service, REST for external clients
- Message Queue: RabbitMQ 3.12+ for asynchronous events
- Container Runtime: Docker 24+ (standardization, portability)
- API Gateway: Traefik 2.10+ (modern, cloud-native, labels)
- Monitoring: Prometheus + Grafana + Loki (industry standard)
1. ARCHITECTURAL OVERVIEW
1.1 Global Architecture
graph TB
subgraph "Frontend Layer"
WEB[Web App React]
MOBILE[Mobile Apps RN]
DESKTOP[Desktop Electron]
end
subgraph "API Gateway Layer"
TRAEFIK[Traefik API Gateway]
LB[Load Balancer]
end
subgraph "Backend Services"
API[API Backend Go]
CHAT[Chat Server Rust]
STREAM[Stream Server Rust]
WORKER[Background Workers]
end
subgraph "Data Layer"
PG[(PostgreSQL)]
REDIS[(Redis Cluster)]
S3[(Object Storage S3)]
ES[(Elasticsearch)]
end
subgraph "Message Layer"
RABBITMQ[RabbitMQ]
KAFKA[Kafka Optional]
end
subgraph "External Services"
STRIPE[Stripe Payments]
SENDGRID[SendGrid Email]
CDN[CloudFront CDN]
end
WEB --> TRAEFIK
MOBILE --> TRAEFIK
DESKTOP --> TRAEFIK
TRAEFIK --> API
TRAEFIK --> CHAT
TRAEFIK --> STREAM
API --> PG
API --> REDIS
API --> S3
API --> RABBITMQ
API --> STRIPE
API --> SENDGRID
CHAT --> PG
CHAT --> REDIS
CHAT --> RABBITMQ
STREAM --> PG
STREAM --> REDIS
STREAM --> S3
STREAM --> CDN
WORKER --> RABBITMQ
WORKER --> PG
WORKER --> S3
API --> ES
ES -.-> PG
1.2 Architectural Patterns
1.2.1 Clean Architecture (Hexagonal)
All services follow the Clean Architecture pattern with strict separation into layers:
src/
├── domain/              # Business entities and rules (independent)
│   ├── entities/        # User, Track, Playlist, etc.
│   ├── value_objects/   # Email, Money, Duration, etc.
│   └── interfaces/      # Ports (abstractions)
├── application/         # Use cases, application services
│   ├── use_cases/       # CreateUser, UploadTrack, etc.
│   ├── services/        # AuthService, PaymentService, etc.
│   └── dto/             # Data Transfer Objects
├── infrastructure/      # Technical implementations
│   ├── persistence/     # PostgreSQL repositories
│   ├── cache/           # Redis adapters
│   ├── messaging/       # RabbitMQ publishers/consumers
│   ├── storage/         # S3 file storage
│   └── external/        # Stripe, SendGrid, etc.
└── interfaces/          # External adapters
    ├── http/            # REST handlers (Gin)
    ├── grpc/            # gRPC handlers (Tonic)
    └── websocket/       # WebSocket handlers (Axum)
Dependency Rule: dependencies ALWAYS point inward:
- interfaces → application → domain
- infrastructure → application → domain
- domain depends on NOTHING (pure business logic)
1.2.2 Domain-Driven Design (DDD)
Bounded Contexts (21 Domains)
1. Authentication & Security (auth)
   - Aggregates: User, Session, MFAConfig
   - Events: UserRegistered, UserLoggedIn, PasswordChanged
2. User Profiles (profiles)
   - Aggregates: UserProfile, Badge, Achievement
   - Events: ProfileUpdated, BadgeEarned
3. File Management (files)
   - Aggregates: File, Upload, Metadata
   - Events: FileUploaded, FileProcessed, FileDeleted
4. Audio Streaming (streaming)
   - Aggregates: Track, Playlist, Queue
   - Events: TrackPlayed, PlaylistCreated, QueueUpdated
5. Chat & Messaging (chat)
   - Aggregates: Conversation, Message, Room
   - Events: MessageSent, RoomCreated, UserJoinedRoom
6. Social & Community (social)
   - Aggregates: Follow, Post, Comment, Like
   - Events: UserFollowed, PostCreated, CommentAdded
7. Marketplace (marketplace)
   - Aggregates: Product, License, Order
   - Events: ProductListed, OrderPlaced, PaymentProcessed
8. Education (education)
   - Aggregates: Course, Lesson, Enrollment
   - Events: CourseCreated, EnrollmentStarted, LessonCompleted
9. Hardware Management (hardware)
   - Aggregates: Equipment, Warranty, Maintenance
   - Events: EquipmentAdded, WarrantyExpiring
10. Cloud Storage (cloud)
    - Aggregates: CloudAccount, SyncJob, Backup
    - Events: BackupCompleted, SyncFailed
11. Search & Discovery (search)
    - Aggregates: SearchQuery, SearchResult, Recommendation
    - Events: SearchPerformed, RecommendationGenerated
12. Analytics (analytics)
    - Aggregates: Metric, Report, Dashboard
    - Events: MetricRecorded, ReportGenerated
13. Administration (admin)
    - Aggregates: AdminUser, ModeratorAction, SystemConfig
    - Events: UserBanned, ContentRemoved
14. UI/UX (ui)
    - Aggregates: Theme, Layout, Preference
    - Events: ThemeChanged, LayoutCustomized
15. AI & Advanced Features (ai)
    - Aggregates: MasteringJob, StemSeparation, AIModel
    - Events: AudioProcessed, ModelTrained
16. Live Streaming (live)
    - Aggregates: LiveStream, StreamSession, Viewer
    - Events: StreamStarted, ViewerJoined
17. Collaboration (collab)
    - Aggregates: Project, Version, Contributor
    - Events: ProjectCreated, VersionCommitted
18. Blockchain/Web3 (web3)
    - Aggregates: NFT, SmartContract, Wallet
    - Events: NFTMinted, RoyaltyDistributed
19. External Integrations (integrations)
    - Aggregates: Integration, Connection, Sync
    - Events: IntegrationConnected, SyncCompleted
20. Mobile/Desktop Apps (native)
    - Aggregates: Device, AppSession, Notification
    - Events: DeviceRegistered, NotificationSent
21. Gamification (gamification)
    - Aggregates: XP, Level, Achievement, Leaderboard
    - Events: XPGained, LevelUp, AchievementUnlocked
1.2.3 Event-Driven Architecture
graph LR
SERVICE_A[Service A] -->|Publish Event| RABBITMQ[RabbitMQ Exchange]
RABBITMQ -->|Route| QUEUE_B[Queue B]
RABBITMQ -->|Route| QUEUE_C[Queue C]
QUEUE_B --> SERVICE_B[Service B Subscribe]
QUEUE_C --> SERVICE_C[Service C Subscribe]
SERVICE_B -->|Store| EVENT_STORE[(Event Store)]
SERVICE_C -->|Store| EVENT_STORE
Event Naming Convention: {Domain}.{Entity}.{Action}.{Version}
- Examples: auth.user.registered.v1, marketplace.order.paid.v1
Event Structure (JSON):
{
"event_id": "uuid-v4",
"event_type": "auth.user.registered.v1",
"aggregate_id": "user-123",
"aggregate_type": "User",
"timestamp": "2025-11-02T10:30:00Z",
"version": 1,
"data": {
"user_id": "123",
"email": "user@example.com",
"username": "johndoe"
},
"metadata": {
"correlation_id": "request-456",
"causation_id": "command-789",
"user_agent": "Mozilla/5.0..."
}
}
1.2.4 CQRS (Command Query Responsibility Segregation)
Strict Write/Read separation:
Commands (Write)              Queries (Read)
       ↓                            ↑
  Write Model                  Read Model
       ↓                            ↑
PostgreSQL (Master)       Redis + Elasticsearch
Command Example (Go):
type CreateUserCommand struct {
    CommandID string
    Username  string
    Email     string
    Password  string
    FirstName string
    LastName  string
}

type CreateUserHandler struct {
    repo UserRepository
    bus  EventBus
}

func (h *CreateUserHandler) Handle(ctx context.Context, cmd CreateUserCommand) error {
    // 1. Validate
    if err := cmd.Validate(); err != nil {
        return ErrInvalidCommand
    }
    // 2. Create aggregate
    user, err := domain.NewUser(cmd.Username, cmd.Email, cmd.Password)
    if err != nil {
        return err
    }
    // 3. Persist
    if err := h.repo.Save(ctx, user); err != nil {
        return err
    }
    // 4. Publish events
    for _, event := range user.UncommittedEvents() {
        h.bus.Publish(ctx, event)
    }
    return nil
}
Query Example (Go):
type GetUserQuery struct {
    UserID string
}

type GetUserQueryHandler struct {
    cache  *redis.Client
    search *elasticsearch.Client
}

func (h *GetUserQueryHandler) Handle(ctx context.Context, q GetUserQuery) (*UserDTO, error) {
    // 1. Try cache first
    cached, err := h.cache.Get(ctx, "user:"+q.UserID).Result()
    if err == nil {
        return unmarshal(cached)
    }
    // 2. Query read model
    user, err := h.search.Get(ctx, "users", q.UserID)
    if err != nil {
        return nil, ErrUserNotFound
    }
    // 3. Cache for next time
    h.cache.Set(ctx, "user:"+q.UserID, user, 5*time.Minute)
    return toDTO(user), nil
}
2. SERVICE ARCHITECTURE
2.1 Backend API (Go)
Repository: /veza-backend-api
Port: 8080
Protocol: HTTP/2 (REST), gRPC
Language: Go 1.23+
Framework: Gin 1.9+ (HTTP), gRPC-Go 1.59+
2.1.1 Responsibilities
- Authentication/Authorization (JWT, OAuth2, 2FA)
- User, profile, and role management
- REST API for web/mobile clients
- Core business logic
- Workflow orchestration
- Transaction management
- Documented public API (OpenAPI)
2.1.2 Main Endpoints
Base URL: https://api.veza.app/v1
| Group | Endpoints | Methods | Auth Required |
|---|---|---|---|
| Auth | /auth/* | POST | Partial |
| - Register | /auth/register | POST | No |
| - Login | /auth/login | POST | No |
| - Logout | /auth/logout | POST | Yes |
| - Refresh | /auth/refresh | POST | Yes (Refresh Token) |
| - 2FA Setup | /auth/2fa/setup | POST | Yes |
| - 2FA Verify | /auth/2fa/verify | POST | Yes |
| - Password Reset | /auth/password/reset | POST | No |
| Users | /users/* | GET, PUT, DELETE | Yes |
| - Get Profile | /users/{id} | GET | Yes |
| - Update Profile | /users/{id} | PUT | Yes |
| - Delete Account | /users/{id} | DELETE | Yes |
| - Upload Avatar | /users/{id}/avatar | POST | Yes |
| Tracks | /tracks/* | GET, POST, PUT, DELETE | Partial |
| - List Tracks | /tracks | GET | No |
| - Get Track | /tracks/{id} | GET | No |
| - Upload Track | /tracks | POST | Yes |
| - Update Metadata | /tracks/{id} | PUT | Yes (Owner) |
| - Delete Track | /tracks/{id} | DELETE | Yes (Owner) |
| Playlists | /playlists/* | GET, POST, PUT, DELETE | Partial |
| Marketplace | /marketplace/* | GET, POST | Partial |
| Search | /search/* | GET | No |
| Admin | /admin/* | ALL | Yes (Admin Role) |
2.1.3 Internal Structure
veza-backend-api/
├── cmd/
│   └── api/
│       └── main.go                  # Entry point
├── internal/
│   ├── domain/                      # Domain layer (pure business)
│   │   ├── user/
│   │   │   ├── user.go              # User aggregate
│   │   │   ├── repository.go        # User repository interface
│   │   │   └── events.go            # Domain events
│   │   ├── track/
│   │   ├── playlist/
│   │   └── shared/
│   │       ├── errors.go
│   │       └── value_objects.go
│   ├── application/                 # Application layer
│   │   ├── commands/
│   │   │   ├── create_user.go
│   │   │   └── upload_track.go
│   │   ├── queries/
│   │   │   ├── get_user.go
│   │   │   └── search_tracks.go
│   │   └── services/
│   │       ├── auth_service.go
│   │       ├── payment_service.go
│   │       └── notification_service.go
│   ├── infrastructure/              # Infrastructure layer
│   │   ├── persistence/
│   │   │   ├── postgres/
│   │   │   │   ├── user_repository.go
│   │   │   │   └── migrations/
│   │   │   └── redis/
│   │   │       └── cache_repository.go
│   │   ├── messaging/
│   │   │   └── rabbitmq/
│   │   │       ├── publisher.go
│   │   │       └── consumer.go
│   │   ├── storage/
│   │   │   └── s3/
│   │   │       └── file_storage.go
│   │   └── external/
│   │       ├── stripe/
│   │       └── sendgrid/
│   └── interfaces/                  # Interfaces layer
│       ├── http/
│       │   ├── handlers/
│       │   │   ├── auth_handler.go
│       │   │   └── user_handler.go
│       │   ├── middleware/
│       │   │   ├── auth.go
│       │   │   ├── rate_limit.go
│       │   │   └── cors.go
│       │   └── routes.go
│       └── grpc/
│           └── server.go
├── pkg/                             # Public packages
│   ├── logger/
│   ├── validator/
│   └── jwt/
├── migrations/                      # Database migrations
│   ├── 001_create_users.sql
│   └── 002_create_tracks.sql
├── go.mod
└── go.sum
2.1.4 Technologies & Dependencies
// go.mod (exact versions LOCKED)
module veza-backend-api
go 1.23.8
require (
github.com/gin-gonic/gin v1.9.1 // HTTP framework
github.com/google/uuid v1.6.0 // UUID generation
github.com/golang-jwt/jwt/v5 v5.3.0 // JWT tokens
github.com/lib/pq v1.10.9 // PostgreSQL driver
gorm.io/gorm v1.25.5 // ORM
gorm.io/driver/postgres v1.5.4 // GORM PostgreSQL
github.com/redis/go-redis/v9 v9.16.0 // Redis client
github.com/rabbitmq/amqp091-go v1.9.0 // RabbitMQ
github.com/aws/aws-sdk-go-v2 v1.24.0 // AWS S3
github.com/stripe/stripe-go/v76 v76.16.0 // Stripe payments
go.uber.org/zap v1.27.0 // Structured logging
github.com/prometheus/client_golang v1.18.0 // Metrics
google.golang.org/grpc v1.59.0 // gRPC
github.com/spf13/viper v1.18.2 // Configuration
golang.org/x/crypto v0.41.0 // Bcrypt, argon2
)
2.2 Chat Server (Rust)
Repository: /veza-chat-server
Port: 8081
Protocol: WebSocket (WS/WSS), gRPC
Language: Rust 1.75+
Framework: Axum 0.8+, Tokio 1.35+
2.2.1 Responsibilities
- Real-time chat (WebSocket)
- 1-to-1 and group messaging
- Public/private rooms
- Push notifications
- User presence (online/offline)
- Message history and search
- File sharing in chat
- Reactions and threads
- End-to-end encryption (optional)
2.2.2 WebSocket Architecture
sequenceDiagram
participant Client
participant WSServer
participant RoomManager
participant MessageStore
participant RabbitMQ
Client->>WSServer: WebSocket Upgrade
WSServer->>Client: 101 Switching Protocols
Client->>WSServer: Auth Token
WSServer->>RoomManager: Register Connection
Client->>WSServer: Send Message
WSServer->>MessageStore: Store Message
MessageStore->>RabbitMQ: Publish Event
WSServer->>RoomManager: Broadcast to Room
RoomManager->>Client: Deliver Message
2.2.3 Message Protocol
WebSocket Message Format (JSON):
{
"type": "message",
"action": "send",
"data": {
"room_id": "room-123",
"content": "Hello World!",
"message_type": "text",
"reply_to": null,
"metadata": {}
},
"request_id": "req-456",
"timestamp": "2025-11-02T10:30:00Z"
}
Server Response:
{
"type": "message",
"action": "received",
"data": {
"message_id": "msg-789",
"room_id": "room-123",
"sender_id": "user-123",
"content": "Hello World!",
"created_at": "2025-11-02T10:30:00Z"
},
"request_id": "req-456"
}
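The chat server itself is Rust, but the envelope is language-agnostic. As an illustrative sketch (the helper and struct names below are assumptions, not repository code), a Go client could encode a `message/send` frame and correlate the server's acknowledgment by `request_id`:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// WSFrame is the envelope used on the chat WebSocket.
type WSFrame struct {
	Type      string          `json:"type"`
	Action    string          `json:"action"`
	Data      json.RawMessage `json:"data"`
	RequestID string          `json:"request_id"`
	Timestamp string          `json:"timestamp,omitempty"`
}

// SendMessage is the payload of a "message/send" frame.
type SendMessage struct {
	RoomID      string `json:"room_id"`
	Content     string `json:"content"`
	MessageType string `json:"message_type"`
}

// EncodeSend builds the "message/send" frame for a room.
func EncodeSend(requestID, roomID, content string) ([]byte, error) {
	data, err := json.Marshal(SendMessage{RoomID: roomID, Content: content, MessageType: "text"})
	if err != nil {
		return nil, err
	}
	return json.Marshal(WSFrame{Type: "message", Action: "send", Data: data, RequestID: requestID})
}

// DecodeFrame parses a server frame; callers match it to the pending
// request via RequestID.
func DecodeFrame(raw []byte) (*WSFrame, error) {
	var f WSFrame
	if err := json.Unmarshal(raw, &f); err != nil {
		return nil, err
	}
	return &f, nil
}

func main() {
	frame, _ := EncodeSend("req-456", "room-123", "Hello World!")
	fmt.Println(string(frame))
}
```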
2.2.4 Internal Structure
veza-chat-server/
├── src/
│   ├── main.rs              # Entry point
│   ├── lib.rs               # Library root
│   ├── config.rs            # Configuration
│   ├── websocket/
│   │   ├── mod.rs
│   │   ├── handler.rs       # WebSocket handler
│   │   ├── connection.rs    # Connection management
│   │   └── protocol.rs      # Message protocol
│   ├── rooms/
│   │   ├── mod.rs
│   │   ├── manager.rs       # Room management
│   │   └── state.rs         # Room state
│   ├── messages/
│   │   ├── mod.rs
│   │   ├── store.rs         # Message persistence
│   │   └── search.rs        # Message search
│   ├── presence/
│   │   ├── mod.rs
│   │   └── tracker.rs       # User presence
│   ├── auth/
│   │   ├── mod.rs
│   │   └── jwt.rs           # JWT validation
│   ├── models/
│   │   ├── mod.rs
│   │   ├── message.rs
│   │   ├── room.rs
│   │   └── user.rs
│   └── infrastructure/
│       ├── database.rs      # PostgreSQL
│       ├── cache.rs         # Redis
│       └── messaging.rs     # RabbitMQ
├── migrations/
│   ├── 001_create_rooms.sql
│   └── 002_create_messages.sql
├── Cargo.toml
└── Cargo.lock
2.2.5 Technologies & Dependencies
[dependencies]
# Async runtime
tokio = { version = "1.35", features = ["full", "tracing"] }
axum = { version = "0.8", features = ["macros", "ws"] }
# WebSocket
tokio-tungstenite = "0.21"
futures-util = "0.3"
# Database
sqlx = { version = "0.8", features = ["postgres", "runtime-tokio-native-tls", "uuid", "chrono", "json"] }
redis = { version = "0.32", features = ["tokio-comp", "connection-manager"] }
# Serialization
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
# Security
jsonwebtoken = "9.2"
bcrypt = "0.17"
# Messaging
lapin = "2.3" # RabbitMQ client
# Monitoring
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }
# Utilities
uuid = { version = "1.6", features = ["v4", "serde"] }
chrono = { version = "0.4", features = ["serde"] }
dashmap = "6.1" # Concurrent HashMap
2.3 Stream Server (Rust)
Repository: /veza-stream-server
Port: 8082
Protocol: HTTP/2, HLS, WebRTC
Language: Rust 1.75+
Framework: Axum 0.7+, Tokio 1.0+
2.3.1 Responsibilities
- High-performance audio streaming
- Multi-format transcoding (MP3, FLAC, AAC, OGG)
- Adaptive bitrate streaming (HLS)
- Audio processing (normalization, EQ)
- Waveform generation
- CDN integration
- DRM protection (optional)
- Live streaming support
- Low-latency playback (<50ms)
2.3.2 Streaming Pipeline
graph LR
UPLOAD[File Upload] --> VALIDATE[Validation]
VALIDATE --> TRANSCODE[Transcoding]
TRANSCODE --> NORMALIZE[Audio Normalization]
NORMALIZE --> WAVEFORM[Waveform Generation]
WAVEFORM --> SEGMENTS[HLS Segmentation]
SEGMENTS --> CDN[CDN Upload]
CDN --> READY[Ready to Stream]
2.3.3 Supported Formats
| Input Formats | Output Formats | Bitrates | Sample Rates |
|---|---|---|---|
| WAV | MP3 | 128, 192, 256, 320 kbps | 44.1, 48 kHz |
| FLAC | AAC | 128, 192, 256, 320 kbps | 44.1, 48, 96 kHz |
| MP3 | OGG Vorbis | 128, 192, 256, 320 kbps | 44.1, 48 kHz |
| M4A | FLAC | Lossless | 44.1, 48, 96, 192 kHz |
| AIFF | OPUS | 64, 96, 128, 192 kbps | 48 kHz |
2.3.4 HLS Streaming
Playlist Structure (.m3u8):
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment0.ts
#EXTINF:10.0,
segment1.ts
#EXTINF:10.0,
segment2.ts
#EXT-X-ENDLIST
Adaptive Bitrate:
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=128000,RESOLUTION=0x0
stream_128k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=192000,RESOLUTION=0x0
stream_192k.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=320000,RESOLUTION=0x0
stream_320k.m3u8
2.3.5 Technologies & Dependencies
[dependencies]
# Async runtime
tokio = { version = "1.0", features = ["full"] }
axum = { version = "0.7", features = ["macros", "multipart", "ws"] }
# Audio processing
symphonia = { version = "0.5", features = ["all"] }
hound = "3.5" # WAV
minimp3 = "0.5" # MP3 decoder
rubato = "0.15" # Resampling
# FFT and signal processing
rustfft = "6.2"
dasp = "0.11"
# Streaming protocols
m3u8-rs = "5.0" # HLS playlists
# Database
sqlx = { version = "0.7", features = ["postgres", "runtime-tokio-rustls", "uuid", "chrono"] }
redis = { version = "0.25", features = ["tokio-comp", "connection-manager"] }
# Storage
reqwest = { version = "0.11", features = ["json", "stream"] }
# Compression
brotli = "3.4"
lz4_flex = "0.11"
# Utilities
uuid = { version = "1.6", features = ["v4", "serde"] }
chrono = { version = "0.4", features = ["serde"] }
bytes = "1.5"
2.4 Frontend Web (React)
Repository: /apps/web
Port: 5176 (dev), 443 (prod)
Protocol: HTTPS
Language: TypeScript 5.3+
Framework: React 18+, Vite 7+
2.4.1 Responsibilities
- Web user interface
- State management (Zustand)
- API client (Axios + React Query)
- WebSocket client (chat, notifications)
- Audio player UI
- Forms with validation (React Hook Form + Zod)
- Internationalization (i18next)
- Responsive design (Tailwind CSS)
- Progressive Web App (PWA)
2.4.2 Frontend Architecture
apps/web/
├── public/
│   ├── manifest.json
│   └── service-worker.js
├── src/
│   ├── main.tsx                 # Entry point
│   ├── App.tsx                  # Root component
│   ├── features/                # Feature-based organization
│   │   ├── auth/
│   │   │   ├── components/
│   │   │   │   ├── LoginForm.tsx
│   │   │   │   └── RegisterForm.tsx
│   │   │   ├── hooks/
│   │   │   │   └── useAuth.ts
│   │   │   ├── services/
│   │   │   │   └── authService.ts
│   │   │   └── types.ts
│   │   ├── player/
│   │   ├── chat/
│   │   ├── marketplace/
│   │   └── dashboard/
│   ├── components/              # Shared components
│   │   ├── ui/                  # Base UI components
│   │   │   ├── Button.tsx
│   │   │   ├── Input.tsx
│   │   │   └── Card.tsx
│   │   └── layout/
│   │       ├── Header.tsx
│   │       ├── Sidebar.tsx
│   │       └── Footer.tsx
│   ├── lib/                     # Utilities
│   │   ├── api.ts               # API client
│   │   ├── websocket.ts         # WebSocket client
│   │   └── utils.ts
│   ├── stores/                  # Zustand stores
│   │   ├── authStore.ts
│   │   ├── playerStore.ts
│   │   └── chatStore.ts
│   ├── hooks/                   # Custom hooks
│   │   ├── useApi.ts
│   │   └── useWebSocket.ts
│   ├── types/                   # TypeScript types
│   │   └── index.ts
│   ├── i18n/                    # Translations
│   │   ├── en.json
│   │   ├── fr.json
│   │   └── i18n.ts
│   └── styles/
│       └── globals.css
├── package.json
├── tsconfig.json
├── vite.config.ts
└── tailwind.config.js
2.4.3 Technologies & Dependencies
{
"dependencies": {
"react": "^18.2.0",
"react-dom": "^18.2.0",
"react-router-dom": "^6.22.0",
"@tanstack/react-query": "^5.17.0",
"axios": "^1.6.7",
"zustand": "^4.5.0",
"react-hook-form": "^7.49.3",
"zod": "^3.25.76",
"@hookform/resolvers": "^3.3.4",
"i18next": "^25.5.2",
"react-i18next": "^15.7.3",
"lucide-react": "^0.321.0",
"tailwind-merge": "^2.2.1",
"clsx": "^2.1.0"
},
"devDependencies": {
"@vitejs/plugin-react": "^4.2.1",
"typescript": "^5.3.3",
"vite": "^7.1.5",
"tailwindcss": "^4.0.0",
"eslint": "^9.0.0",
"prettier": "^3.2.5",
"vitest": "^3.2.4",
"@playwright/test": "^1.41.2"
}
}
3. INFRASTRUCTURE AND DEPLOYMENT
3.1 Deployment Architecture
graph TB
subgraph "Production Environment"
subgraph "Edge Layer"
CF[CloudFlare CDN]
WAF[Web Application Firewall]
end
subgraph "Load Balancing"
NGINX[NGINX Load Balancer]
end
subgraph "Application Layer"
API1[API Server 1]
API2[API Server 2]
API3[API Server 3]
CHAT1[Chat Server 1]
CHAT2[Chat Server 2]
STREAM1[Stream Server 1]
STREAM2[Stream Server 2]
end
subgraph "Data Layer"
PG_PRIMARY[(PostgreSQL Primary)]
PG_REPLICA1[(PostgreSQL Replica 1)]
PG_REPLICA2[(PostgreSQL Replica 2)]
REDIS_CLUSTER[(Redis Cluster)]
end
subgraph "Storage Layer"
S3[S3 Object Storage]
CDN_ORIGIN[CDN Origin]
end
end
CF --> WAF
WAF --> NGINX
NGINX --> API1
NGINX --> API2
NGINX --> API3
NGINX --> CHAT1
NGINX --> CHAT2
NGINX --> STREAM1
NGINX --> STREAM2
API1 --> PG_PRIMARY
API2 --> PG_PRIMARY
API3 --> PG_PRIMARY
API1 --> REDIS_CLUSTER
CHAT1 --> REDIS_CLUSTER
STREAM1 --> REDIS_CLUSTER
PG_PRIMARY --> PG_REPLICA1
PG_PRIMARY --> PG_REPLICA2
STREAM1 --> S3
S3 --> CDN_ORIGIN
CDN_ORIGIN --> CF
3.2 Environments
| Environment | URL | Purpose | Data |
|---|---|---|---|
| Development | http://localhost:* | Local dev | Fixtures |
| Staging | https://staging.veza.app | Pre-prod testing | Anonymized prod |
| Production | https://veza.app | Live users | Production |
3.3 Docker Compose (Development)
# docker-compose.yml
version: '3.8'

services:
  postgres:
    image: postgres:15-alpine
    environment:
      POSTGRES_DB: veza_db
      POSTGRES_USER: veza
      POSTGRES_PASSWORD: ${DB_PASSWORD}
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U veza"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    command: redis-server --appendonly yes
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5

  rabbitmq:
    image: rabbitmq:3.12-management-alpine
    ports:
      - "5672:5672"
      - "15672:15672"
    environment:
      RABBITMQ_DEFAULT_USER: veza
      RABBITMQ_DEFAULT_PASS: ${RABBITMQ_PASSWORD}
    volumes:
      - rabbitmq_data:/var/lib/rabbitmq
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "ping"]
      interval: 10s
      timeout: 5s
      retries: 5

  backend-api:
    build:
      context: ./veza-backend-api
      dockerfile: Dockerfile
    ports:
      - "8080:8080"
    environment:
      DATABASE_URL: postgres://veza:${DB_PASSWORD}@postgres:5432/veza_db
      REDIS_URL: redis://redis:6379
      RABBITMQ_URL: amqp://veza:${RABBITMQ_PASSWORD}@rabbitmq:5672/
      JWT_SECRET: ${JWT_SECRET}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      rabbitmq:
        condition: service_healthy

  chat-server:
    build:
      context: ./veza-chat-server
      dockerfile: Dockerfile
    ports:
      - "8081:8081"
    environment:
      DATABASE_URL: postgres://veza:${DB_PASSWORD}@postgres:5432/veza_db
      REDIS_URL: redis://redis:6379
      RABBITMQ_URL: amqp://veza:${RABBITMQ_PASSWORD}@rabbitmq:5672/
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  stream-server:
    build:
      context: ./veza-stream-server
      dockerfile: Dockerfile
    ports:
      - "8082:8082"
    environment:
      DATABASE_URL: postgres://veza:${DB_PASSWORD}@postgres:5432/veza_db
      REDIS_URL: redis://redis:6379
      S3_ENDPOINT: ${S3_ENDPOINT}
      S3_BUCKET: ${S3_BUCKET}
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy

  frontend:
    build:
      context: ./apps/web
      dockerfile: Dockerfile
    ports:
      - "5176:80"
    environment:
      VITE_API_URL: http://backend-api:8080
      VITE_WS_URL: ws://chat-server:8081
      VITE_STREAM_URL: http://stream-server:8082

volumes:
  postgres_data:
  redis_data:
  rabbitmq_data:
3.4 Ports and Protocols
| Service | Port | Protocol | Access |
|---|---|---|---|
| Backend API | 8080 | HTTP/2, gRPC | Public |
| Chat Server | 8081 | WebSocket | Public |
| Stream Server | 8082 | HTTP/2, HLS | Public |
| PostgreSQL | 5432 | TCP | Internal |
| Redis | 6379 | TCP | Internal |
| RabbitMQ | 5672 | AMQP | Internal |
| RabbitMQ Management | 15672 | HTTP | Internal |
| Prometheus | 9090 | HTTP | Internal |
| Grafana | 3000 | HTTP | Internal |
| Traefik Dashboard | 8888 | HTTP | Internal |
4. DATA FLOWS
4.1 Authentication Flow
sequenceDiagram
participant User
participant Frontend
participant API
participant PostgreSQL
participant Redis
User->>Frontend: Enter credentials
Frontend->>API: POST /auth/login
API->>PostgreSQL: Validate credentials
PostgreSQL-->>API: User found
API->>API: Generate JWT
API->>Redis: Store session
API-->>Frontend: JWT + Refresh Token
Frontend->>Frontend: Store tokens (httpOnly cookie)
Frontend->>API: GET /users/me (with JWT)
API->>Redis: Validate JWT
Redis-->>API: Session valid
API-->>Frontend: User profile
4.2 Track Upload Flow
sequenceDiagram
participant User
participant Frontend
participant API
participant StreamServer
participant S3
participant RabbitMQ
participant Worker
participant PostgreSQL
User->>Frontend: Select audio file
Frontend->>API: POST /tracks/upload (multipart)
API->>API: Validate file (type, size)
API->>S3: Upload raw file
S3-->>API: File URL
API->>PostgreSQL: Create track record (status: processing)
API->>RabbitMQ: Publish TrackUploaded event
API-->>Frontend: Track ID, status
RabbitMQ->>Worker: Consume TrackUploaded event
Worker->>S3: Download raw file
Worker->>StreamServer: POST /transcode
StreamServer->>StreamServer: Transcode to multiple formats
StreamServer->>StreamServer: Generate waveform
StreamServer->>S3: Upload processed files
StreamServer-->>Worker: Processing complete
Worker->>PostgreSQL: Update track (status: ready)
Worker->>RabbitMQ: Publish TrackProcessed event
RabbitMQ->>API: Consume TrackProcessed event
API->>Frontend: WebSocket notification
Frontend->>User: Track ready!
4.3 Real-Time Chat Flow
sequenceDiagram
participant UserA
participant FrontendA
participant ChatServer
participant Redis
participant PostgreSQL
participant RabbitMQ
participant FrontendB
participant UserB
FrontendA->>ChatServer: WebSocket connect
ChatServer->>Redis: Register connection
FrontendB->>ChatServer: WebSocket connect
ChatServer->>Redis: Register connection
UserA->>FrontendA: Type message
FrontendA->>ChatServer: Send message
ChatServer->>PostgreSQL: Store message
ChatServer->>Redis: Cache message
ChatServer->>RabbitMQ: Publish MessageSent event
ChatServer->>FrontendB: Deliver message (WebSocket)
FrontendB->>UserB: Display message
5. SECURITY
5.1 Authentication & Authorization
5.1.1 JWT Structure
{
"header": {
"alg": "RS256",
"typ": "JWT"
},
"payload": {
"sub": "user-123",
"email": "user@example.com",
"roles": ["user", "creator"],
"permissions": ["track:create", "track:read"],
"iat": 1730550000,
"exp": 1730553600,
"jti": "token-uuid"
}
}
Token Lifetime:
- Access Token: 15 minutes
- Refresh Token: 30 days
- Remember Me Token: 90 days
5.1.2 RBAC (Role-Based Access Control)
Roles Hierarchy:
admin (all permissions)
↓
moderator (moderation permissions)
↓
creator (content creation permissions)
↓
premium_user (premium features)
↓
user (basic permissions)
↓
guest (read-only)
Permissions Matrix:
| Resource | guest | user | creator | premium | moderator | admin |
|---|---|---|---|---|---|---|
| Track Read | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Track Create | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Track Delete Own | ❌ | ✅ | ✅ | ✅ | ✅ | ✅ |
| Track Delete Any | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| User Ban | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
| System Config | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ |
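Because the hierarchy is linear, the whole matrix collapses to "permission P requires at least role R". A minimal Go sketch of that check (the maps and function name are illustrative, not backend code):

```go
package main

import "fmt"

// roleLevel encodes the hierarchy above: a higher level inherits all
// permissions of the levels below it.
var roleLevel = map[string]int{
	"guest": 0, "user": 1, "premium_user": 2, "creator": 3, "moderator": 4, "admin": 5,
}

// minRoleFor maps each permission to the lowest role allowed to use it
// (a simplified encoding of the permissions matrix).
var minRoleFor = map[string]string{
	"track:read":       "guest",
	"track:create":     "user",
	"track:delete_own": "user",
	"track:delete_any": "moderator",
	"user:ban":         "moderator",
	"system:config":    "admin",
}

// Allowed reports whether a role may perform a permission.
func Allowed(role, permission string) bool {
	min, ok := minRoleFor[permission]
	if !ok {
		return false // deny unknown permissions by default
	}
	return roleLevel[role] >= roleLevel[min]
}

func main() {
	fmt.Println(Allowed("creator", "track:create"), Allowed("user", "user:ban"))
}
```

Note the deny-by-default branch for unknown permissions: an unmapped permission is refused rather than silently granted.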
5.2 Data Encryption
At Rest:
- Database: AES-256-GCM (PostgreSQL pgcrypto)
- Files: AES-256-CBC (S3 SSE-KMS)
- Backups: AES-256-GCM
In Transit:
- TLS 1.3 minimum
- Cipher suites: TLS_AES_256_GCM_SHA384, TLS_CHACHA20_POLY1305_SHA256
- Certificate: Let's Encrypt + automatic renewal
5.3 Security Headers
Strict-Transport-Security: max-age=31536000; includeSubDomains; preload
Content-Security-Policy: default-src 'self'; script-src 'self' 'unsafe-inline'; style-src 'self' 'unsafe-inline'
X-Frame-Options: DENY
X-Content-Type-Options: nosniff
X-XSS-Protection: 1; mode=block
Referrer-Policy: strict-origin-when-cross-origin
Permissions-Policy: geolocation=(), microphone=(), camera=()
5.4 Rate Limiting
| Endpoint | Rate Limit | Window | Burst |
|---|---|---|---|
| /auth/login | 5 requests | 15 min | 2 |
| /auth/register | 3 requests | 1 hour | 1 |
| /api/v1/* (authenticated) | 1000 requests | 1 min | 100 |
| /api/v1/* (guest) | 100 requests | 1 min | 20 |
| WebSocket connections | 10 connections | 1 min | 2 |
6. PERFORMANCE AND SCALABILITY
6.1 Performance Targets
| Metric | Target | Measurement |
|---|---|---|
| API Response Time (p95) | < 100ms | Prometheus |
| API Response Time (p99) | < 200ms | Prometheus |
| WebSocket Latency | < 50ms | Custom metrics |
| Database Query Time (p95) | < 10ms | pg_stat_statements |
| Page Load Time (FCP) | < 1.5s | Lighthouse |
| Page Load Time (LCP) | < 2.5s | Lighthouse |
| Time to Interactive (TTI) | < 3.5s | Lighthouse |
| Audio Playback Start | < 500ms | Custom metrics |
6.2 Scalability Targets
| Resource | Target | Strategy |
|---|---|---|
| Concurrent Users | 100,000+ | Horizontal scaling |
| Audio Streams | 10,000+ | CDN + adaptive bitrate |
| WebSocket Connections | 50,000+ | Multi-instance + Redis pub/sub |
| Database Connections | 1,000+ | Connection pooling (pgBouncer) |
| Messages/sec | 100,000+ | Queue sharding |
| File Uploads/min | 1,000+ | Background workers |
6.3 Caching Strategy
Cache Layers:
1. Browser Cache (Service Worker):
   - Static assets: 1 year
   - API responses: 5 minutes
   - Waveforms: 1 hour
2. CDN Cache (CloudFlare):
   - Audio files: 7 days
   - Images: 30 days
   - Static JS/CSS: 1 year
3. Redis Cache:
   - User sessions: 30 days
   - User profiles: 1 hour
   - Track metadata: 15 minutes
   - Search results: 5 minutes
   - Leaderboards: 1 minute
4. Application Cache (in-memory):
   - Configuration: Until restart
   - Feature flags: 1 minute
   - JWT public keys: 1 hour
Cache Invalidation:
- Write-through: Update DB + cache simultaneously
- Cache-aside: Read from cache, fallback to DB
- Event-driven: Invalidate on domain events
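The cache-aside pattern above can be sketched in a few lines of Go. This is illustrative only: the map stands in for Redis, `loadFromDB` for the real query, and TTL handling is omitted:

```go
package main

import (
	"fmt"
	"sync"
)

var (
	mu    sync.Mutex
	cache = map[string]string{}
	dbHit = 0 // counts fallback reads, to show the cache working
)

// loadFromDB is a stand-in for the real database query.
func loadFromDB(id string) string {
	dbHit++
	return "track-meta-for-" + id
}

// getTrackMeta implements cache-aside: read from the cache, fall back
// to the database on a miss, then populate the cache for later readers.
func getTrackMeta(id string) string {
	mu.Lock()
	defer mu.Unlock()
	if v, ok := cache[id]; ok {
		return v // cache hit
	}
	v := loadFromDB(id) // cache miss: fall back to DB
	cache[id] = v       // populate for the next reader
	return v
}

func main() {
	getTrackMeta("t1")
	getTrackMeta("t1")
	fmt.Println(dbHit) // 1: the second read was served from cache
}
```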
6.4 Database Optimization
Indexes (top 10 critical):
-- Users
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_users_username ON users(username);
CREATE INDEX idx_users_created_at ON users(created_at DESC);
-- Tracks
CREATE INDEX idx_tracks_creator_id ON tracks(creator_id);
CREATE INDEX idx_tracks_genre ON tracks(genre);
CREATE INDEX idx_tracks_created_at ON tracks(created_at DESC);
-- Messages
CREATE INDEX idx_messages_room_id_created_at ON messages(room_id, created_at DESC);
CREATE INDEX idx_messages_sender_id ON messages(sender_id);
-- Search
CREATE INDEX idx_tracks_search ON tracks USING GIN(to_tsvector('english', title || ' ' || artist));
CREATE INDEX idx_users_search ON users USING GIN(to_tsvector('english', username || ' ' || display_name));
Partitioning (the parent tables are declared with PARTITION BY RANGE on the timestamp column; child partitions are then created per period):
-- Messages table partitioned by month
CREATE TABLE messages_2025_01 PARTITION OF messages
FOR VALUES FROM ('2025-01-01') TO ('2025-02-01');
-- Analytics events partitioned by day
CREATE TABLE analytics_events_2025_01_01 PARTITION OF analytics_events
FOR VALUES FROM ('2025-01-01') TO ('2025-01-02');
Connection Pooling (pgBouncer):
[databases]
veza_db = host=postgres port=5432 dbname=veza_db
[pgbouncer]
pool_mode = transaction
max_client_conn = 1000
default_pool_size = 25
reserve_pool_size = 5
reserve_pool_timeout = 3
7. OBSERVABILITY
7.1 Logging
Log Levels:
- TRACE: Very detailed (disabled in production)
- DEBUG: Detailed debugging information
- INFO: General informational messages
- WARN: Warning messages
- ERROR: Error messages
- FATAL: Critical errors (application crash)
Log Format (JSON):
{
"timestamp": "2025-11-02T10:30:00.123Z",
"level": "INFO",
"service": "backend-api",
"trace_id": "abc123",
"span_id": "def456",
"message": "User logged in successfully",
"user_id": "user-123",
"ip": "192.168.1.100",
"duration_ms": 45
}
Centralized Logging (Loki):
# promtail-config.yaml
server:
http_listen_port: 9080
clients:
- url: http://loki:3100/loki/api/v1/push
scrape_configs:
- job_name: docker
docker_sd_configs:
- host: unix:///var/run/docker.sock
relabel_configs:
- source_labels: ['__meta_docker_container_name']
target_label: 'container'
7.2 Metrics (Prometheus)
Application Metrics:
// Go (backend-api)
import "github.com/prometheus/client_golang/prometheus"
var (
httpRequestsTotal = prometheus.NewCounterVec(
prometheus.CounterOpts{
Name: "http_requests_total",
Help: "Total number of HTTP requests",
},
[]string{"method", "endpoint", "status"},
)
httpRequestDuration = prometheus.NewHistogramVec(
prometheus.HistogramOpts{
Name: "http_request_duration_seconds",
Help: "HTTP request duration in seconds",
Buckets: prometheus.DefBuckets,
},
[]string{"method", "endpoint"},
)
activeWebSocketConnections = prometheus.NewGauge(
prometheus.GaugeOpts{
Name: "websocket_connections_active",
Help: "Number of active WebSocket connections",
},
)
)
Infrastructure Metrics:
- CPU usage per service
- Memory usage per service
- Disk I/O
- Network I/O
- Database connections
- Cache hit/miss ratio
7.3 Tracing (Jaeger)
Distributed Tracing:
import (
    "context"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/attribute"
    "go.opentelemetry.io/otel/codes"
)

func CreateUser(ctx context.Context, input CreateUserInput) error {
    ctx, span := otel.Tracer("backend-api").Start(ctx, "CreateUser")
    defer span.End()
    span.SetAttributes(
        attribute.String("user.username", input.Username),
        attribute.String("user.email", input.Email),
    )
    // Business logic... (createUserInDB is a placeholder for the repository call)
    if err := createUserInDB(ctx, input); err != nil {
        span.RecordError(err)
        span.SetStatus(codes.Error, err.Error())
        return err
    }
    span.SetStatus(codes.Ok, "User created successfully")
    return nil
}
7.4 Alerting
Alert Rules (Prometheus Alertmanager):
groups:
- name: veza_alerts
interval: 30s
rules:
# High error rate
- alert: HighErrorRate
expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
for: 5m
labels:
severity: critical
annotations:
summary: "High error rate detected"
description: "Error rate is {{ $value | humanizePercentage }}"
# High response time
- alert: HighResponseTime
expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) > 0.5
for: 5m
labels:
severity: warning
annotations:
summary: "High response time (p95)"
description: "Response time p95 is {{ $value }}s"
# Database connection pool exhausted
- alert: DatabasePoolExhausted
expr: pg_stat_database_numbackends / pg_settings_max_connections > 0.9
for: 2m
labels:
severity: critical
annotations:
summary: "Database connection pool nearly exhausted"
description: "Connection usage is {{ $value | humanizePercentage }}"
# Service down
- alert: ServiceDown
expr: up{job="backend-api"} == 0
for: 1m
labels:
severity: critical
annotations:
summary: "Backend API is down"
description: "Backend API has been down for more than 1 minute"
8. ARCHITECTURAL DECISIONS (ADR)
ADR-001: Go for the Backend API
Date: 2025-01-01
Status: Accepted
Context: Need a fast, statically typed language with good concurrency support for a high-load REST API.
Decision: Use Go 1.23+ with the Gin framework.
Consequences:
- ✅ Fast compilation, single binary
- ✅ Goroutines for concurrency
- ✅ Strong typing catches most errors at compile time
- ✅ Excellent fit for microservices
- ❌ Verbose code (error handling)
- ❌ Ecosystem less rich than Node.js
Rejected alternatives:
- Node.js: single-threaded, lower performance
- Python: GIL, poor performance for high-load APIs
- Java: heavyweight, slow startup, added complexity
ADR-002: Rust for Real-Time Services
Date: 2025-01-01
Status: Accepted
Context: High-performance WebSocket handling where memory safety is critical.
Decision: Use Rust 1.75+ with Axum + Tokio.
Consequences:
- ✅ Zero-cost abstractions
- ✅ Guaranteed memory safety
- ✅ Native performance (C/C++ level)
- ✅ Safe concurrency (ownership model)
- ❌ Steep learning curve
- ❌ Long compile times
Rejected alternatives:
- Go: garbage collection (unpredictable latency)
- C++: no memory safety, high complexity
- Elixir: lower performance for audio streaming
ADR-003: PostgreSQL as the Primary Database
Date: 2025-01-01
Status: Accepted
Context: Need ACID guarantees, complex relations, and strong performance.
Decision: PostgreSQL 15+ as the primary database.
Consequences:
- ✅ Full ACID compliance
- ✅ Complex relations (foreign keys, joins)
- ✅ Built-in full-text search
- ✅ JSON/JSONB for flexibility
- ✅ Extensions (pgcrypto, pg_trgm, etc.)
- ❌ Horizontal scaling is complex
Rejected alternatives:
- MySQL: fewer advanced features
- MongoDB: weaker ACID guarantees, awkward relations
- CockroachDB: too young, limited ecosystem
ADR-004: Modular Microservices Architecture
Date: 2025-01-01
Status: Accepted
Context: 600+ features, multiple teams, scalability requirements.
Decision: Microservices architecture with 3 main services (API, Chat, Stream).
Consequences:
- ✅ Independent scaling
- ✅ Different technology per service
- ✅ Failure isolation
- ✅ Independent deployments
- ❌ Operational complexity
- ❌ Distributed transactions are hard
Rejected alternatives:
- Monolith: does not scale, risky deployments
- Serverless: vendor lock-in, cold starts
- Full microservices (20+ services): too complex to start with
ADR-005: gRPC for Inter-Service Communication
Date: 2025-01-01
Status: Accepted
Context: Fast, strongly typed communication between services.
Decision: gRPC with Protocol Buffers between services, REST for clients.
Consequences:
- ✅ Strong typing via protobuf
- ✅ Performance (binary encoding, HTTP/2)
- ✅ Code generation
- ✅ Bidirectional streaming
- ❌ Harder to debug
- ❌ Less universal than REST
Rejected alternatives:
- REST between services: slower, untyped
- GraphQL: too complex for inter-service calls
- Pure message queue: latency, complexity
ADR-006: Redis for Caching and Sessions
Date: 2025-01-01
Status: Accepted
Context: Need an ultra-fast in-memory cache plus pub/sub.
Decision: Redis 7+ in Cluster mode.
Consequences:
- ✅ Exceptional performance (<1ms)
- ✅ Built-in pub/sub
- ✅ Rich data structures
- ✅ Cluster mode (horizontal scaling)
- ❌ Volatile (RAM-based)
- ❌ Cost (RAM is expensive)
Rejected alternatives:
- Memcached: fewer features
- Per-process in-memory cache: not shared across instances
- Hazelcast: too complex, Java-centric
ADR-007: RabbitMQ as Message Queue
Date: 2025-01-01
Status: Accepted
Context: Asynchronous events, decoupling of services.
Decision: RabbitMQ 3.12+ with AMQP.
Consequences:
- ✅ Mature and stable
- ✅ Flexible routing (exchanges, queues)
- ✅ Delivery guarantees
- ✅ Management UI
- ❌ Lower throughput than Kafka
- ❌ Persistence less optimized than Kafka's
Rejected alternatives:
- Kafka: over-engineered for this stage, complex
- AWS SQS: vendor lock-in
- NATS: persistence less mature
ADR-008: React with TypeScript for the Frontend
Date: 2025-01-01
Status: Accepted
Context: Complex UI, strict typing, rich ecosystem.
Decision: React 18+ with strict TypeScript 5.3+.
Consequences:
- ✅ Huge ecosystem
- ✅ Strict typing (fewer runtime errors)
- ✅ Performance (Concurrent Mode)
- ✅ Very large community
- ❌ Large bundle size
- ❌ State-management complexity
Rejected alternatives:
- Vue.js: smaller ecosystem
- Svelte: less mature, limited ecosystem
- Angular: too heavy, opinionated
ADR-009: Vite as the Frontend Build Tool
Date: 2025-01-01
Status: Accepted
Context: Fast builds, high-performance HMR.
Decision: Vite 7+ instead of Webpack.
Consequences:
- ✅ Very fast builds (ESBuild)
- ✅ Instant HMR
- ✅ Simple configuration
- ✅ Native TypeScript support
- ❌ Ecosystem less mature than Webpack's
Rejected alternatives:
- Webpack: slow, complex configuration
- Parcel: slower than Vite
- Rollup: fewer DX features
ADR-010: Docker for Containerization
Date: 2025-01-01
Status: Accepted
Context: Consistent deployments across environments.
Decision: Docker 24+ with multi-stage builds.
Consequences:
- ✅ Full portability
- ✅ Isolation
- ✅ Mature ecosystem
- ✅ CI/CD integration
- ❌ Slight performance overhead
- ❌ Security concerns (root privileges)
Rejected alternatives:
- VMs: too heavy, slow
- Bare metal: not portable
- Podman: less mature
9. NAMING CONVENTIONS
9.1 Base de Données
Tables: snake_case, plural
users
tracks
playlists
playlist_tracks
Columns: snake_case
user_id
created_at
updated_at
first_name
Indexes: idx_{table}_{column(s)}
idx_users_email
idx_tracks_creator_id_created_at
Foreign Keys: fk_{source_table}_{target_table}
fk_playlist_tracks_playlists
fk_playlist_tracks_tracks
9.2 Backend Go
Packages: lowercase, singular
domain
application
infrastructure
Structs: PascalCase
type User struct { }
type CreateUserCommand struct { }
Functions/Methods: PascalCase (public), camelCase (private)
func CreateUser() { } // Public
func validateEmail() { } // Private
Variables: camelCase
var userRepository UserRepository
var maxConnections int
Constants: PascalCase or SCREAMING_SNAKE_CASE
const MaxRetries = 3
const DEFAULT_TIMEOUT = 30 * time.Second
Interfaces: PascalCase, suffix -er si applicable
type UserRepository interface { }
type Logger interface { }
type Validator interface { }
9.3 Rust
Modules: snake_case
mod websocket;
mod message_store;
Structs/Enums: PascalCase
struct Message { }
enum MessageType { }
Functions: snake_case
fn send_message() { }
fn validate_token() { }
Constants: SCREAMING_SNAKE_CASE
const MAX_MESSAGE_SIZE: usize = 1024;
Traits: PascalCase
trait MessageStore { }
trait Authenticator { }
9.4 TypeScript/React
Files: PascalCase (components), camelCase (utilities)
LoginForm.tsx
Button.tsx
utils.ts
api.ts
Components: PascalCase
function LoginForm() { }
const Button: React.FC = () => { }
Functions: camelCase
function fetchUsers() { }
const handleSubmit = () => { }
Types/Interfaces: PascalCase
interface User { }
type CreateUserInput = { }
Enums: PascalCase
enum UserRole { }
Constants: SCREAMING_SNAKE_CASE
const API_BASE_URL = "https://api.veza.app";
9.5 API REST
Endpoints: kebab-case, plural for resources
GET /api/v1/users
POST /api/v1/users
GET /api/v1/users/{id}
PUT /api/v1/users/{id}
DELETE /api/v1/users/{id}
POST /api/v1/users/{id}/avatar
GET /api/v1/users/{id}/playlists
Query Params: snake_case
GET /api/v1/tracks?genre=rock&sort_by=created_at&order=desc
JSON Fields: camelCase
{
"userId": "123",
"firstName": "John",
"createdAt": "2025-11-02T10:30:00Z"
}
9.6 Events
Event Names: {domain}.{entity}.{action}.{version}
auth.user.registered.v1
marketplace.order.paid.v1
chat.message.sent.v1
Event Fields: camelCase (JSON)
{
"eventId": "evt-123",
"eventType": "auth.user.registered.v1",
"aggregateId": "user-123"
}
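A tiny helper can build names in the `{domain}.{entity}.{action}.{version}` scheme so services never hand-type them inconsistently (illustrative sketch, not an existing utility in the codebase):

```go
package main

import (
	"fmt"
	"strings"
)

// eventType builds a "{domain}.{entity}.{action}.{version}" event name
// per the convention above; centralizing the format avoids drift
// between producers and consumers.
func eventType(domain, entity, action string, version int) string {
	return strings.Join([]string{domain, entity, action, fmt.Sprintf("v%d", version)}, ".")
}

func main() {
	fmt.Println(eventType("auth", "user", "registered", 1)) // auth.user.registered.v1
}
```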
10. DIRECTORY STRUCTURE
veza-full-stack/
├── .github/ # GitHub Actions CI/CD
│ └── workflows/
│ ├── backend-ci.yml
│ ├── chat-ci.yml
│ ├── stream-ci.yml
│ └── frontend-ci.yml
├── ansible/ # Ansible deployment
│ ├── inventory/
│ ├── playbooks/
│ └── roles/
├── apps/ # Frontend applications
│ ├── web/ # React web app
│ ├── mobile/ # React Native mobile
│ └── desktop/ # Electron desktop
├── config/ # Centralized configuration
│ ├── docker/
│ ├── prometheus/
│ ├── grafana/
│ └── nginx/
├── docs/ # Documentation
│ ├── ORIGIN/ # ⭐ ORIGIN documents (immutable)
│ │ ├── ORIGIN_MASTER_ARCHITECTURE.md
│ │ ├── ORIGIN_DEVELOPMENT_PHASES.md
│ │ ├── ORIGIN_FEATURES_REGISTRY.md
│ │ └── ... (15 documents)
│ ├── architecture/
│ ├── api/
│ └── guides/
├── features/ # Feature flags & contracts
│ └── core-contracts/
├── fixtures/ # Test data & fixtures
│ ├── scenarios/
│ └── services/
├── scripts/ # Utility scripts
│ ├── start-veza.sh
│ ├── stop-veza.sh
│ └── test-veza.sh
├── veza-backend-api/ # ⭐ Backend API (Go)
│ ├── cmd/
│ │ └── api/
│ │ └── main.go
│ ├── internal/
│ │ ├── domain/
│ │ ├── application/
│ │ ├── infrastructure/
│ │ └── interfaces/
│ ├── pkg/
│ ├── migrations/
│ ├── go.mod
│ └── Dockerfile
├── veza-chat-server/ # ⭐ Chat Server (Rust)
│ ├── src/
│ │ ├── main.rs
│ │ ├── lib.rs
│ │ ├── websocket/
│ │ ├── rooms/
│ │ └── messages/
│ ├── migrations/
│ ├── Cargo.toml
│ └── Dockerfile
├── veza-stream-server/ # ⭐ Stream Server (Rust)
│ ├── src/
│ │ ├── main.rs
│ │ ├── audio/
│ │ └── streaming/
│ ├── Cargo.toml
│ └── Dockerfile
├── veza-rust-common/ # Shared Rust code
│ ├── src/
│ │ ├── auth.rs
│ │ ├── models.rs
│ │ └── utils.rs
│ └── Cargo.toml
├── docker-compose.yml # Development environment
├── docker-compose.production.yml # Production environment
├── Makefile # Build automation
├── .env.example # Environment variables template
└── README.md # Project README
✅ VALIDATION CHECKLIST
Architecture
- All services are defined with ports and protocols
- Data flows are documented with diagrams
- DDD bounded contexts are clearly delimited
- Architectural patterns are explicit (Clean, CQRS, Event-Driven)
- Architectural decisions (ADRs) are documented with their rationale
Security
- JWT authentication with refresh tokens
- RBAC authorization with a permission matrix
- At-rest and in-transit encryption defined
- Rate limiting configured per endpoint
- Complete set of security headers
Performance
- Performance targets defined (latency, throughput)
- Multi-level caching strategy
- Database optimizations (indexes, partitioning)
- Connection pooling configured
- CDN for static assets and audio
Observability
- Structured JSON logging centralized in Loki
- Prometheus metrics for every service
- Distributed tracing (Jaeger/OpenTelemetry)
- Alerting rules defined (Alertmanager)
- Grafana dashboards
Infrastructure
- Docker Compose for development
- Kubernetes-ready (future deployments)
- GitHub Actions CI/CD pipelines
- Automated Ansible deployment
- Environments (dev, staging, prod) defined
📊 SUCCESS METRICS
Technical
- Code Quality: coverage > 80%, SonarQube Quality Gate A
- API Performance: p95 < 100ms, p99 < 200ms
- Frontend Performance: Lighthouse score > 90
- Uptime: > 99.9% (SLA)
- Security: zero critical vulnerabilities (Snyk/Dependabot)
Business
- Concurrent Users: 100,000+ supported
- Audio Streams: 10,000+ simultaneous
- WebSocket Connections: 50,000+ simultaneous
- Messages/sec: 100,000+
- Database Queries: p95 < 10ms
DevOps
- Deploy Time: < 10 minutes (zero-downtime)
- Rollback Time: < 5 minutes
- Build Time: < 5 minutes (CI)
- MTTR (Mean Time To Recovery): < 15 minutes
- Change Failure Rate: < 5%
🔄 VERSION HISTORY
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2025-11-02 | Initial version - complete, definitive architecture |
⚠️ WARNING
THIS DOCUMENT IS IMMUTABLE
Any change to this document must follow the formal Change Management process:
- Proposal: create an RFC (Request For Comments) with a detailed justification
- Review: review by the technical team (Lead Backend, Lead Frontend, DevOps, CTO)
- Approval: unanimous approval required
- Documentation: update the ADRs with the new decision
- Communication: announce the change to the whole team
- Implementation: plan the migration if needed
Acceptable reasons for modification:
- A critical security vulnerability is discovered
- A technology reaches end-of-life
- Unrecoverable performance degradation
- A regulatory change (GDPR, etc.)
Unacceptable reasons:
- "We prefer technology X"
- "It's more fashionable"
- "I saw on Hacker News..."
- "My previous project used Y"
Document created by: Architecture Team
Creation date: 2025-11-02
Next review: 2026-11-02 (or on a critical trigger)
Owner: CTO / Lead Architect
Status: ✅ APPROVED AND LOCKED