# ORIGIN_TESTING_STRATEGY.md

## 📋 EXECUTIVE SUMMARY

This document defines the complete, definitive testing strategy for the Veza platform. It covers all test types (unit, integration, E2E, performance, security, load) with a minimum coverage of 80%, standardized tooling, an automated CI/CD process, and established methodologies (TDD, BDD). The strategy is designed to guarantee production quality, reduce bugs, and sustain confidence in continuous deployment over a 24-month horizon.

## 🎯 OBJECTIVES

### Primary Objective

Establish an exhaustive testing culture with coverage ≥ 80%, fully automated CI/CD, and early regression detection, so that production deployments carry minimal risk and long-term maintenance remains manageable.

### Secondary Objectives

- Reduce the production bug rate (< 0.1% of releases)
- Accelerate the feedback loop (tests < 10 min in CI/CD)
- Enable confident refactoring (tests as a safety net)
- Document expected behavior through tests
- Detect performance problems before they reach production

## 📖 TABLE OF CONTENTS

1. [Testing Philosophy](#1-testing-philosophy)
2. [Test Types & Coverage](#2-test-types--coverage)
3. [Unit Testing](#3-unit-testing)
4. [Integration Testing](#4-integration-testing)
5. [End-to-End Testing](#5-end-to-end-testing)
6. [Performance Testing](#6-performance-testing)
7. [Security Testing](#7-security-testing)
8. [Load & Stress Testing](#8-load--stress-testing)
9. [Test Data Management](#9-test-data-management)
10. [CI/CD Pipeline Testing](#10-cicd-pipeline-testing)
11. [Test Automation](#11-test-automation)
12. [Quality Gates](#12-quality-gates)

## 🔒 IMMUTABLE RULES

1. **Coverage Minimum**: 80% line coverage for new code; CI/CD blocks the merge below 80%
2. **Test Before Merge**: no PR is merged without tests, no exceptions
3. **Test Isolation**: every test is independent (no shared state, no order dependency)
4. **Fast Feedback**: unit tests < 2 min, integration < 5 min, E2E < 10 min
5. **Deterministic**: tests are 100% reproducible (no flaky tests tolerated)
6. **Test Data**: isolated fixtures with automatic cleanup (no test pollution)
7. **Mocking**: external dependencies are mocked (DB, APIs, time)
8. **Regression**: every bug fix ships with a new test (prevent recurrence)
9. **Documentation**: tests serve as documentation (readable, self-explanatory)
10. **Performance**: performance tests run in CI/CD (regression detection)

## 1. TESTING PHILOSOPHY

### 1.1 Testing Pyramid

```
          /\
         /  \
        / E2E \          10% - Slow, Brittle, High Cost
       /--------\
      /          \
     / Integration\      20% - Medium Speed, Medium Cost
    /--------------\
   /                \
  /    Unit Tests    \   70% - Fast, Reliable, Low Cost
 /--------------------\
```

**Distribution**:

- **Unit Tests (70%)**: Fast, isolated, test single functions/methods
- **Integration Tests (20%)**: Medium speed, test component interactions
- **E2E Tests (10%)**: Slow, test complete user flows

### 1.2 Testing Principles

**F.I.R.S.T Principles**:

- **Fast**: Tests execute quickly (unit < 100ms, all tests < 10 min)
- **Independent**: No test depends on another (can run in any order)
- **Repeatable**: Same result every time (deterministic)
- **Self-Validating**: Pass/fail is unambiguous (no manual inspection)
- **Timely**: Written with the code (not after, preferably before: TDD)

**Test Qualities**:

- **Readable**: Anyone can understand what is being tested
- **Maintainable**: Easy to update when requirements change
- **Trustworthy**: No false positives/negatives (no flaky tests)
- **Comprehensive**: Cover the happy path + edge cases + error cases

### 1.3 TDD (Test-Driven Development)

**Red-Green-Refactor Cycle**:

```
1. 🔴 RED: Write a failing test first
2. 🟢 GREEN: Write the minimum code to pass
3. 🔵 REFACTOR: Clean up the code while tests stay green
4. Repeat
```

**Example Flow**:
```go
// 1. RED - Write a failing test
func TestCreateUser_Success(t *testing.T) {
    service := NewUserService()
    user := &User{Email: "test@example.com"}

    err := service.CreateUser(user)

    assert.NoError(t, err)
    assert.NotEmpty(t, user.ID)
}
// Test fails - CreateUser doesn't exist yet

// 2. GREEN - Implement the minimum code
func (s *UserService) CreateUser(user *User) error {
    user.ID = uuid.New().String()
    return nil
}
// Test passes

// 3. REFACTOR - Improve the code
func (s *UserService) CreateUser(user *User) error {
    if err := validateEmail(user.Email); err != nil {
        return err
    }
    user.ID = uuid.New().String()
    return s.repo.Save(user)
}
// Tests still pass
```

## 2. TEST TYPES & COVERAGE

### 2.1 Coverage Requirements

| Test Type | Coverage Target | Execution Time | Frequency |
|-----------|-----------------|----------------|-----------|
| **Unit Tests** | ≥ 80% line coverage | < 2 min | Every commit |
| **Integration Tests** | ≥ 70% API endpoints | < 5 min | Every commit |
| **E2E Tests** | ≥ 50% critical flows | < 10 min | Every commit (smoke), full suite nightly |
| **Performance Tests** | 100% critical endpoints | < 15 min | Nightly + pre-release |
| **Security Tests** | 100% OWASP Top 10 | < 20 min | Weekly + pre-release |
| **Load Tests** | 100% production scenarios | 30-60 min | Weekly + pre-release |

### 2.2 Coverage Metrics

**Tracked Metrics**:

- **Line Coverage**: % of lines executed during tests
- **Branch Coverage**: % of conditional branches tested
- **Function Coverage**: % of functions called
- **Statement Coverage**: % of statements executed

**Tools**:

- **Go**: `go test -cover`, `gocov`
- **Rust**: `cargo tarpaulin`
- **TypeScript**: `vitest --coverage` (c8)

**Example Coverage Report**:

```bash
# Go
go test ./... -coverprofile=coverage.out
go tool cover -html=coverage.out -o coverage.html

# Coverage output:
# internal/services/user_service.go:  CreateUser  85.7%
# internal/services/track_service.go: UploadTrack 92.3%
# internal/handlers/auth_handlers.go: Login       78.5%  ⚠️ Below 80%
```
## 3. UNIT TESTING

### 3.1 Unit Test Structure (AAA Pattern)

**Arrange-Act-Assert**:

```go
func TestUserService_CreateUser_Success(t *testing.T) {
    // ARRANGE - Set up test data and dependencies
    mockRepo := &MockUserRepository{}
    mockRepo.CreateFunc = func(user *User) error { return nil }
    service := NewUserService(mockRepo)
    user := &User{
        Email:    "test@example.com",
        Username: "testuser",
    }

    // ACT - Execute the function under test
    err := service.CreateUser(user)

    // ASSERT - Verify the outcome
    assert.NoError(t, err)
    assert.NotEmpty(t, user.ID)
    assert.NotEmpty(t, user.CreatedAt)
}
```

### 3.2 Test Naming Convention

**Format**: `Test<Function>_<Scenario>_<ExpectedResult>`

**Examples**:

```go
// ✅ GOOD - Descriptive test names
func TestCreateUser_ValidData_Success(t *testing.T) { }
func TestCreateUser_DuplicateEmail_ReturnsConflictError(t *testing.T) { }
func TestCreateUser_InvalidEmail_ReturnsValidationError(t *testing.T) { }
func TestGetUserByID_ExistingUser_ReturnsUser(t *testing.T) { }
func TestGetUserByID_NonExistentUser_ReturnsNotFoundError(t *testing.T) { }

// ❌ BAD - Unclear test names
func TestCreateUser(t *testing.T) { }
func TestUser(t *testing.T) { }
func TestSuccess(t *testing.T) { }
```

### 3.3 Table-Driven Tests

```go
func TestValidateEmail(t *testing.T) {
    tests := []struct {
        name    string
        email   string
        wantErr bool
        errMsg  string
    }{
        {
            name:    "valid email",
            email:   "user@example.com",
            wantErr: false,
        },
        {
            name:    "valid email with subdomain",
            email:   "user@mail.example.com",
            wantErr: false,
        },
        {
            name:    "missing @ symbol",
            email:   "userexample.com",
            wantErr: true,
            errMsg:  "invalid email format",
        },
        {
            name:    "missing domain",
            email:   "user@",
            wantErr: true,
            errMsg:  "invalid email format",
        },
        {
            name:    "empty string",
            email:   "",
            wantErr: true,
            errMsg:  "email is required",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := validateEmail(tt.email)
            if tt.wantErr {
                assert.Error(t, err)
                if tt.errMsg != "" {
                    assert.Contains(t, err.Error(), tt.errMsg)
                }
            } else {
                assert.NoError(t, err)
            }
        })
    }
}
```
### 3.4 Mocking (Go)

**Interface-Based Mocking**:

```go
// Production interface
type UserRepository interface {
    Create(user *User) error
    GetByID(id string) (*User, error)
    GetByEmail(email string) (*User, error)
    Update(user *User) error
    Delete(id string) error
}

// Mock implementation
type MockUserRepository struct {
    CreateFunc     func(user *User) error
    GetByIDFunc    func(id string) (*User, error)
    GetByEmailFunc func(email string) (*User, error)
    UpdateFunc     func(user *User) error
    DeleteFunc     func(id string) error
}

func (m *MockUserRepository) Create(user *User) error {
    if m.CreateFunc != nil {
        return m.CreateFunc(user)
    }
    return nil
}

func (m *MockUserRepository) GetByID(id string) (*User, error) {
    if m.GetByIDFunc != nil {
        return m.GetByIDFunc(id)
    }
    return nil, errors.New("not implemented")
}

// Test usage
func TestUserService_CreateUser(t *testing.T) {
    mockRepo := &MockUserRepository{
        CreateFunc: func(user *User) error {
            user.ID = "mock-id-123"
            return nil
        },
        GetByEmailFunc: func(email string) (*User, error) {
            return nil, gorm.ErrRecordNotFound // Simulate no existing user
        },
    }
    service := NewUserService(mockRepo)
    user := &User{Email: "test@example.com"}

    err := service.CreateUser(user)

    assert.NoError(t, err)
    assert.Equal(t, "mock-id-123", user.ID)
}
```

### 3.5 Testing Async Code (TypeScript)

```typescript
import { describe, it, expect, vi } from 'vitest';

describe('UserService', () => {
  it('should create user successfully', async () => {
    // ARRANGE
    const mockRepo = {
      create: vi.fn().mockResolvedValue({ id: 'user-123' }),
      getByEmail: vi.fn().mockResolvedValue(null),
    };
    const service = new UserService(mockRepo);
    const user = { email: 'test@example.com', username: 'testuser' };

    // ACT
    const result = await service.createUser(user);

    // ASSERT
    expect(result).toHaveProperty('id');
    expect(mockRepo.create).toHaveBeenCalledWith(
      expect.objectContaining({ email: 'test@example.com' })
    );
  });

  it('should handle errors gracefully', async () => {
    // ARRANGE
    const mockRepo = {
      create: vi.fn().mockRejectedValue(new Error('Database error')),
      getByEmail: vi.fn().mockResolvedValue(null),
    };
    const service = new UserService(mockRepo);

    // ACT & ASSERT
    await expect(service.createUser({ email: 'test@example.com' }))
      .rejects
      .toThrow('Database error');
  });
});
```
## 4. INTEGRATION TESTING

### 4.1 API Integration Tests (Go)

**Test with a Real Database (Testcontainers)**:

```go
import (
    "testing"

    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/wait"
)

func setupTestDB(t *testing.T) *gorm.DB {
    ctx := context.Background()

    // Start a PostgreSQL container
    req := testcontainers.ContainerRequest{
        Image:        "postgres:15-alpine",
        ExposedPorts: []string{"5432/tcp"},
        Env: map[string]string{
            "POSTGRES_DB":       "test_db",
            "POSTGRES_USER":     "test",
            "POSTGRES_PASSWORD": "test",
        },
        WaitingFor: wait.ForLog("database system is ready to accept connections"),
    }
    container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    require.NoError(t, err)
    t.Cleanup(func() { container.Terminate(ctx) })

    // Get the connection string
    host, _ := container.Host(ctx)
    port, _ := container.MappedPort(ctx, "5432")
    dsn := fmt.Sprintf("host=%s port=%s user=test password=test dbname=test_db sslmode=disable",
        host, port.Port())

    // Connect and migrate
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    require.NoError(t, err)
    db.AutoMigrate(&User{}, &Track{}, &Playlist{})

    return db
}

func TestUserAPI_CreateUser_Integration(t *testing.T) {
    // ARRANGE
    db := setupTestDB(t)
    router := setupRouter(db)

    payload := map[string]interface{}{
        "email":    "test@example.com",
        "username": "testuser",
        "password": "SecurePass123!",
    }
    body, _ := json.Marshal(payload)
    req := httptest.NewRequest("POST", "/api/v1/users", bytes.NewReader(body))
    req.Header.Set("Content-Type", "application/json")
    w := httptest.NewRecorder()

    // ACT
    router.ServeHTTP(w, req)

    // ASSERT
    assert.Equal(t, 201, w.Code)

    var response map[string]interface{}
    json.Unmarshal(w.Body.Bytes(), &response)
    assert.Contains(t, response, "data")

    userData := response["data"].(map[string]interface{})
    assert.Equal(t, "test@example.com", userData["email"])
    assert.NotEmpty(t, userData["id"])

    // Verify in the database
    var user User
    err := db.First(&user, "email = ?", "test@example.com").Error
    assert.NoError(t, err)
    assert.Equal(t, "testuser", user.Username)
}
```
### 4.2 Database Integration Tests (Rust)

```rust
use sqlx::PgPool;

#[sqlx::test]
async fn test_create_user(pool: PgPool) -> sqlx::Result<()> {
    // ARRANGE
    let user = User {
        id: Uuid::new_v4(),
        email: "test@example.com".to_string(),
        username: "testuser".to_string(),
        password_hash: "hashed_password".to_string(),
    };

    // ACT
    let result = sqlx::query!(
        "INSERT INTO users (id, email, username, password_hash) VALUES ($1, $2, $3, $4)",
        user.id,
        user.email,
        user.username,
        user.password_hash
    )
    .execute(&pool)
    .await?;

    // ASSERT
    assert_eq!(result.rows_affected(), 1);

    // Verify the user exists
    let fetched_user = sqlx::query_as!(
        User,
        "SELECT id, email, username, password_hash, created_at FROM users WHERE id = $1",
        user.id
    )
    .fetch_one(&pool)
    .await?;

    assert_eq!(fetched_user.email, "test@example.com");
    assert_eq!(fetched_user.username, "testuser");

    Ok(())
}
```

### 4.3 API Contract Testing

**OpenAPI Schema Validation**:

```typescript
import { describe, it, expect } from 'vitest';
import { validateAgainstSchema } from './openapi-validator';

describe('API Contract Tests', () => {
  it('POST /users response matches OpenAPI schema', async () => {
    const response = await fetch('http://localhost:8080/api/v1/users', {
      method: 'POST',
      body: JSON.stringify({
        email: 'test@example.com',
        username: 'testuser',
        password: 'SecurePass123!',
      }),
      headers: { 'Content-Type': 'application/json' },
    });

    const data = await response.json();

    // Validate against the OpenAPI schema
    const validation = validateAgainstSchema('POST', '/users', 201, data);

    expect(validation.valid).toBe(true);
    expect(validation.errors).toHaveLength(0);
  });
});
```
## 5. END-TO-END TESTING

### 5.1 E2E Testing with Playwright

**Setup**:

```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests/e2e',
  timeout: 30000,
  retries: 2,
  use: {
    baseURL: 'http://localhost:3000',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
    trace: 'on-first-retry',
  },
  projects: [
    { name: 'chromium', use: { browserName: 'chromium' } },
    { name: 'firefox', use: { browserName: 'firefox' } },
    { name: 'webkit', use: { browserName: 'webkit' } },
  ],
  webServer: {
    command: 'npm run dev',
    port: 3000,
    reuseExistingServer: !process.env.CI,
  },
});
```

**E2E Test Example**:

```typescript
import { test, expect } from '@playwright/test';

test.describe('User Registration Flow', () => {
  test('should register new user successfully', async ({ page }) => {
    // Navigate to the registration page
    await page.goto('/register');

    // Fill the registration form
    await page.fill('input[name="email"]', 'newuser@example.com');
    await page.fill('input[name="username"]', 'newuser');
    await page.fill('input[name="password"]', 'SecurePass123!');
    await page.fill('input[name="confirmPassword"]', 'SecurePass123!');

    // Submit the form
    await page.click('button[type="submit"]');

    // Wait for the redirect to the dashboard
    await page.waitForURL('/dashboard');

    // Verify the user is logged in
    await expect(page.locator('text=Welcome, newuser')).toBeVisible();

    // Verify the email verification notice
    await expect(page.locator('text=Please verify your email')).toBeVisible();
  });

  test('should show validation errors for invalid data', async ({ page }) => {
    await page.goto('/register');

    // Submit an empty form
    await page.click('button[type="submit"]');

    // Verify the error messages
    await expect(page.locator('text=Email is required')).toBeVisible();
    await expect(page.locator('text=Username is required')).toBeVisible();
    await expect(page.locator('text=Password is required')).toBeVisible();
  });
});
```
```typescript
test.describe('Track Upload Flow', () => {
  test('should upload track successfully', async ({ page }) => {
    // Log in first
    await page.goto('/login');
    await page.fill('input[name="email"]', 'creator@example.com');
    await page.fill('input[name="password"]', 'password123');
    await page.click('button[type="submit"]');
    await page.waitForURL('/dashboard');

    // Navigate to the upload page
    await page.goto('/upload');

    // Upload a file
    await page.setInputFiles('input[type="file"]', './fixtures/test-track.mp3');

    // Fill the track details
    await page.fill('input[name="title"]', 'Test Track');
    await page.fill('textarea[name="description"]', 'This is a test track');
    await page.selectOption('select[name="genre"]', 'electronic');

    // Submit
    await page.click('button[type="submit"]');

    // Wait for the processing message
    await expect(page.locator('text=Processing your track')).toBeVisible();

    // Verify the redirect to the track page
    await page.waitForURL(/\/tracks\/[a-z0-9-]+/, { timeout: 60000 });

    // Verify the track details
    await expect(page.locator('h1', { hasText: 'Test Track' })).toBeVisible();
  });
});
```

### 5.2 Visual Regression Testing

```typescript
import { test, expect } from '@playwright/test';

test('homepage visual regression', async ({ page }) => {
  await page.goto('/');

  // Take a screenshot and compare it with the baseline
  await expect(page).toHaveScreenshot('homepage.png', {
    fullPage: true,
    maxDiffPixels: 100, // Allow small differences
  });
});

test('user profile visual regression', async ({ page }) => {
  await page.goto('/users/johndoe');

  // Hide dynamic elements (timestamps, play counts)
  await page.locator('.last-activity').evaluate(el => el.style.visibility = 'hidden');
  await page.locator('.play-count').evaluate(el => el.style.visibility = 'hidden');

  await expect(page).toHaveScreenshot('user-profile.png');
});
```
## 6. PERFORMANCE TESTING

### 6.1 Performance Tests with k6

**Load Testing Script**:

```javascript
// k6-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 100 }, // Ramp up to 100 users
    { duration: '5m', target: 100 }, // Stay at 100 users
    { duration: '2m', target: 200 }, // Ramp up to 200 users
    { duration: '5m', target: 200 }, // Stay at 200 users
    { duration: '2m', target: 0 },   // Ramp down to 0
  ],
  thresholds: {
    http_req_duration: ['p(95)<100'], // 95% of requests < 100ms
    http_req_failed: ['rate<0.01'],   // < 1% failure rate
  },
};

export default function () {
  // Test GET /tracks
  const tracksRes = http.get('https://api.veza.app/v1/tracks');
  check(tracksRes, {
    'status is 200': (r) => r.status === 200,
    'response time < 100ms': (r) => r.timings.duration < 100,
  });
  sleep(1);

  // Test GET /tracks/{id}
  const trackRes = http.get('https://api.veza.app/v1/tracks/track-123');
  check(trackRes, {
    'status is 200': (r) => r.status === 200,
    'response time < 50ms': (r) => r.timings.duration < 50,
  });
  sleep(1);
}
```

**Run k6 Tests**:

```bash
# Local test
k6 run k6-load-test.js

# Cloud test (k6 Cloud)
k6 cloud k6-load-test.js

# Output to InfluxDB for a Grafana dashboard
k6 run --out influxdb=http://localhost:8086/k6 k6-load-test.js
```

### 6.2 Database Performance Tests

```go
func BenchmarkGetUserByID(b *testing.B) {
    db := setupTestDB(b)
    repo := NewUserRepository(db)

    // Create a test user
    user := &User{Email: "bench@example.com", Username: "benchuser"}
    repo.Create(user)

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _, err := repo.GetByID(user.ID)
        if err != nil {
            b.Fatal(err)
        }
    }
}

func BenchmarkGetUserByEmail(b *testing.B) {
    db := setupTestDB(b)
    repo := NewUserRepository(db)

    user := &User{Email: "bench@example.com", Username: "benchuser"}
    repo.Create(user)

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _, err := repo.GetByEmail("bench@example.com")
        if err != nil {
            b.Fatal(err)
        }
    }
}

// Run benchmarks:
// go test -bench=. -benchmem
// BenchmarkGetUserByID-8     50000  25000 ns/op  1024 B/op  15 allocs/op
// BenchmarkGetUserByEmail-8  45000  28000 ns/op  1152 B/op  17 allocs/op
```
## 7. SECURITY TESTING

### 7.1 SAST (Static Application Security Testing)

**Tools**:

- **Go**: `gosec`, `nancy` (dependency scanning)
- **Rust**: `cargo audit`, `cargo-geiger`
- **TypeScript**: `npm audit`, `snyk`

**CI/CD Integration**:

```yaml
# .github/workflows/security-scan.yml
name: Security Scan

on: [push, pull_request]

jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      # Go security scan
      - name: Run gosec
        uses: securego/gosec@master
        with:
          args: '-fmt=sarif -out=results.sarif ./...'

      # Rust dependency audit
      - name: Cargo audit
        run: cargo audit

      # Node.js audit
      - name: NPM audit
        run: npm audit --production

      # Snyk scan
      - name: Run Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```

### 7.2 DAST (Dynamic Application Security Testing)

**OWASP ZAP Automated Scan**:

```bash
# Start the application
docker-compose up -d

# Run the ZAP baseline scan
docker run -t owasp/zap2docker-stable zap-baseline.py \
  -t https://staging.veza.app \
  -r zap-report.html

# Run the ZAP full scan
docker run -t owasp/zap2docker-stable zap-full-scan.py \
  -t https://staging.veza.app \
  -r zap-full-report.html
```

### 7.3 Penetration Testing

**Automated Tests**:

```typescript
import { test, expect } from '@playwright/test';

test.describe('Security Tests', () => {
  test('should prevent SQL injection', async ({ request }) => {
    // Try SQL injection in the email field
    const response = await request.post('/api/v1/auth/login', {
      data: {
        email: "admin' OR '1'='1",
        password: "anything",
      },
    });

    // Should return a 400 validation error, not 200
    expect(response.status()).toBe(400);
    const data = await response.json();
    expect(data.error.code).toBe(4001); // Invalid format
  });

  test('should prevent XSS', async ({ page }) => {
    await page.goto('/register');

    // Try an XSS payload in the username field
    await page.fill('input[name="username"]', '<script>alert("xss")</script>');
    await page.fill('input[name="email"]', 'test@example.com');
    await page.fill('input[name="password"]', 'SecurePass123!');
    await page.click('button[type="submit"]');

    // Should show a validation error
    await expect(page.locator('text=Invalid username format')).toBeVisible();
  });

  test('should enforce rate limiting', async ({ request }) => {
    // Send 10 requests quickly
    const promises = Array.from({ length: 10 }, () =>
      request.post('/api/v1/auth/login', {
        data: { email: 'test@example.com', password: 'wrong' },
      })
    );
    const responses = await Promise.all(promises);

    // At least one should be rate limited (429)
    const rateLimited = responses.filter(r => r.status() === 429);
    expect(rateLimited.length).toBeGreaterThan(0);
  });
});
```
## 8. LOAD & STRESS TESTING

### 8.1 Load Testing Scenarios

**Scenario 1: Normal Load (1000 concurrent users)**:

```javascript
// k6-normal-load.js
export const options = {
  vus: 1000,
  duration: '10m',
  thresholds: {
    http_req_duration: ['p(95)<100', 'p(99)<200'],
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  // Simulate realistic user behavior
  http.get('https://api.veza.app/v1/tracks');
  sleep(Math.random() * 3 + 2); // 2-5s

  http.get('https://api.veza.app/v1/tracks/track-123');
  sleep(Math.random() * 5 + 5); // 5-10s

  http.post('https://api.veza.app/v1/tracks/track-123/play');
  sleep(Math.random() * 180 + 60); // 1-4min (listening to the track)
}
```

**Scenario 2: Peak Load (5000 concurrent users)**:

```javascript
export const options = {
  stages: [
    { duration: '5m', target: 5000 },
    { duration: '10m', target: 5000 },
    { duration: '5m', target: 0 },
  ],
};
```

**Scenario 3: Stress Test (find the breaking point)**:

```javascript
export const options = {
  stages: [
    { duration: '2m', target: 1000 },
    { duration: '5m', target: 1000 },
    { duration: '2m', target: 2000 },
    { duration: '5m', target: 2000 },
    { duration: '2m', target: 5000 },
    { duration: '5m', target: 5000 },
    { duration: '2m', target: 10000 }, // Push to the breaking point
    { duration: '5m', target: 10000 },
    { duration: '10m', target: 0 },
  ],
};
```
### 8.2 Spike Testing

```javascript
// k6-spike-test.js
export const options = {
  stages: [
    { duration: '1m', target: 100 },    // Normal load
    { duration: '10s', target: 10000 }, // Sudden spike
    { duration: '3m', target: 10000 },  // Stay at the spike
    { duration: '10s', target: 100 },   // Drop back
    { duration: '1m', target: 100 },    // Recover
  ],
};
```

## 9. TEST DATA MANAGEMENT

### 9.1 Test Fixtures

**Go Test Fixtures**:

```go
// testdata/fixtures.go
package testdata

func CreateTestUser() *User {
    return &User{
        ID:       uuid.New().String(),
        Email:    "test@example.com",
        Username: "testuser",
        Role:     "user",
    }
}

func CreateTestTrack() *Track {
    return &Track{
        ID:              uuid.New().String(),
        Title:           "Test Track",
        Artist:          "Test Artist",
        Genre:           "electronic",
        DurationSeconds: 240,
    }
}
```

**TypeScript Fixtures**:

```typescript
// tests/fixtures/users.ts
export const testUsers = {
  normalUser: {
    id: 'user-123',
    email: 'user@example.com',
    username: 'testuser',
    role: 'user',
  },
  premiumUser: {
    id: 'user-456',
    email: 'premium@example.com',
    username: 'premiumuser',
    role: 'premium',
  },
  admin: {
    id: 'user-789',
    email: 'admin@example.com',
    username: 'admin',
    role: 'admin',
  },
};
```

### 9.2 Test Database Seeding

```sql
-- tests/seed.sql
-- Seed the database with test data
INSERT INTO users (id, email, username, password_hash, role) VALUES
  ('user-123', 'user@example.com', 'testuser', 'hashed_password', 'user'),
  ('user-456', 'premium@example.com', 'premiumuser', 'hashed_password', 'premium'),
  ('user-789', 'admin@example.com', 'admin', 'hashed_password', 'admin');

INSERT INTO tracks (id, user_id, title, artist, genre, duration_seconds) VALUES
  ('track-123', 'user-123', 'Test Track 1', 'Test Artist', 'electronic', 240),
  ('track-456', 'user-123', 'Test Track 2', 'Test Artist', 'rock', 180),
  ('track-789', 'user-456', 'Premium Track', 'Premium Artist', 'jazz', 300);
```
### 9.3 Cleanup Strategy

```go
func TestMain(m *testing.M) {
    // Setup
    db := setupTestDB()

    // Run tests
    code := m.Run()

    // Cleanup
    db.Exec("TRUNCATE TABLE users CASCADE")
    db.Exec("TRUNCATE TABLE tracks CASCADE")
    db.Exec("TRUNCATE TABLE playlists CASCADE")

    os.Exit(code)
}

func setupTest(t *testing.T, db *gorm.DB) {
    t.Cleanup(func() {
        db.Exec("DELETE FROM users")
        db.Exec("DELETE FROM tracks")
    })
}
```

## 10. CI/CD PIPELINE TESTING

### 10.1 GitHub Actions Workflow

```yaml
# .github/workflows/test.yml
name: Test Suite

on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]

jobs:
  unit-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v3

      # Go tests
      - name: Setup Go
        uses: actions/setup-go@v4
        with:
          go-version: '1.21'
      - name: Run Go tests
        run: |
          cd veza-backend-api
          go test ./... -v -coverprofile=coverage.out
          go tool cover -func=coverage.out
      - name: Check coverage
        run: |
          coverage=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
          if (( $(echo "$coverage < 80" | bc -l) )); then
            echo "Coverage $coverage% is below 80%"
            exit 1
          fi

      # Rust tests
      - name: Setup Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - name: Run Rust tests
        run: |
          cd veza-chat-server
          cargo test --all-features

      # TypeScript tests
      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: '20'
      - name: Run TypeScript tests
        run: |
          cd apps/web
          npm ci
          npm run test -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage/coverage-final.json

  integration-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_DB: test_db
          POSTGRES_PASSWORD: postgres
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7-alpine
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: Run integration tests
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/test_db
          REDIS_URL: redis://localhost:6379
        run: |
          cd veza-backend-api
          go test ./tests/integration/... -v

  e2e-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v3
      - name: Install Playwright
        run: |
          cd apps/web
          npm ci
          npx playwright install --with-deps
      - name: Start services
        run: docker-compose up -d
      - name: Wait for services
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8080/health; do sleep 2; done'
      - name: Run E2E tests
        run: |
          cd apps/web
          npx playwright test
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: playwright-report
          path: apps/web/playwright-report/
```
## 11. TEST AUTOMATION

### 11.1 Pre-Commit Hooks (Husky)

```json
// package.json
{
  "scripts": {
    "test": "vitest run",
    "test:unit": "vitest run --filter=unit",
    "lint": "eslint . --ext .ts,.tsx",
    "format": "prettier --write ."
  },
  "lint-staged": {
    "*.{ts,tsx}": [
      "eslint --fix",
      "prettier --write",
      "vitest related --run"
    ]
  },
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged",
      "pre-push": "npm test"
    }
  }
}
```

### 11.2 Test Reporting

**JUnit XML Report**:

```bash
# Go
go test ./... -v 2>&1 | go-junit-report > report.xml

# Vitest
vitest run --reporter=junit --outputFile=report.xml
```

**HTML Coverage Report**:

```bash
# Go
go test ./... -coverprofile=coverage.out
go tool cover -html=coverage.out -o coverage.html

# Vitest
vitest run --coverage
# Opens coverage/index.html
```
## 12. QUALITY GATES

### 12.1 Merge Requirements

**Before a PR can be merged**:

- [ ] All tests pass (unit, integration, E2E)
- [ ] Coverage ≥ 80% for new/modified files
- [ ] No linter errors (warnings ≤ 5)
- [ ] No high/critical security vulnerabilities
- [ ] Performance benchmarks pass (no regression > 10%)
- [ ] Code review approved (≥ 2 reviewers)
- [ ] Documentation updated

### 12.2 Release Quality Gates

**Before a release to production**:

- [ ] All tests pass (including load tests)
- [ ] Coverage ≥ 80% (overall)
- [ ] Zero known high/critical security vulnerabilities
- [ ] Performance targets met (p95 < 100ms)
- [ ] Load test passed (5000 concurrent users)
- [ ] E2E tests passed (all critical flows)
- [ ] Security scan passed (OWASP ZAP)
- [ ] Smoke tests passed in staging

## ✅ VALIDATION CHECKLIST

### Test Coverage

- [ ] Unit tests ≥ 80% line coverage
- [ ] Integration tests cover all API endpoints
- [ ] E2E tests cover all critical user flows
- [ ] Performance tests for all API endpoints
- [ ] Security tests (SAST + DAST)

### Test Quality

- [ ] Tests are fast (unit < 2 min, integration < 5 min, E2E < 10 min)
- [ ] Tests are deterministic (no flaky tests)
- [ ] Tests are isolated (no shared state)
- [ ] Tests are readable (clear AAA structure)
- [ ] Tests are maintainable (DRY, good naming)

### CI/CD Integration

- [ ] Tests run on every commit
- [ ] Coverage reported to Codecov
- [ ] Quality gates enforced
- [ ] Test results visible in the PR
- [ ] Failed tests block the merge

### Documentation

- [ ] Testing strategy documented
- [ ] Test fixtures documented
- [ ] CI/CD pipeline documented
- [ ] Quality gates documented

## 📊 SUCCESS METRICS

### Coverage Metrics

- **Unit Test Coverage**: ≥ 80%
- **Integration Test Coverage**: ≥ 70%
- **E2E Test Coverage**: ≥ 50% (critical flows 100%)

### Performance Metrics

- **Unit Tests**: < 2 minutes
- **Integration Tests**: < 5 minutes
- **E2E Tests**: < 10 minutes (smoke), < 30 minutes (full suite)
- **Load Tests**: < 15 minutes
### Quality Metrics

- **Test Flakiness**: < 1%
- **Bug Escape Rate**: < 0.1% (bugs found in production)
- **Test Maintenance Time**: < 10% of development time

### CI/CD Metrics

- **Build Success Rate**: > 95%
- **Test Execution Time**: < 10 minutes (average)
- **Deployment Frequency**: Multiple per day
- **Mean Time to Recovery (MTTR)**: < 1 hour

## 🔄 VERSION HISTORY

| Version | Date | Changes |
|---------|------|---------|
| 1.0.0 | 2025-11-02 | Initial version, complete testing strategy |

---

## ⚠️ WARNING

**THIS STRATEGY IS IMMUTABLE**

The testing strategy defined here is **LOCKED**. Any modification requires:

1. **Testing Review Board** (QA Lead, Tech Leads, CTO)
2. **Impact Analysis** (CI/CD, coverage, quality gates)
3. **Team Consensus** (75% majority vote)
4. **Migration Plan** (update tests, CI/CD, docs)

**The only authorized exceptions**:

- New testing methodologies (improvements)
- Better-performing testing tools
- New compliance requirements

**Forbidden modifications**:

- Lowering the minimum coverage (< 80%)
- Removing quality gates
- Disabling tests in CI/CD
- Tolerating flaky tests

**"Tests are the safety net that allows rapid development."**

---

**Document created by**: QA Team + Tech Leads
**Creation date**: 2025-11-02
**Next review**: Quarterly (2026-02-01)
**Owner**: QA Lead
**Status**: ✅ **APPROVED AND LOCKED**