veza/veza-docs/ORIGIN/ORIGIN_TESTING_STRATEGY.md
2026-03-05 19:22:31 +01:00
# ORIGIN_TESTING_STRATEGY.md
## 📋 EXECUTIVE SUMMARY
This document defines the complete, definitive testing strategy for the Veza platform. It covers every test type (unit, integration, E2E, performance, security, load) with a minimum coverage of 80%, standardized tooling, an automated CI/CD process, and methodologies (TDD, BDD). This strategy guarantees production quality, fewer bugs, and confidence in continuous deployment over 24 months.
## 🎯 OBJECTIVES
### Primary Objective
Establish an exhaustive testing culture with coverage ≥ 80%, full CI/CD automation, and early regression detection to ensure risk-free production deployments and easier long-term maintenance.
### Secondary Objectives
- Reduce the production bug rate (< 0.1% of releases)
- Speed up the feedback loop (tests < 10 min in CI/CD)
- Enable confident refactoring (tests as a safety net)
- Document expected behavior through tests
- Detect performance problems before production
## 📖 TABLE OF CONTENTS
1. [Testing Philosophy](#1-testing-philosophy)
2. [Test Types & Coverage](#2-test-types--coverage)
3. [Unit Testing](#3-unit-testing)
4. [Integration Testing](#4-integration-testing)
5. [End-to-End Testing](#5-end-to-end-testing)
6. [Performance Testing](#6-performance-testing)
7. [Security Testing](#7-security-testing)
8. [Load & Stress Testing](#8-load--stress-testing)
9. [Test Data Management](#9-test-data-management)
10. [CI/CD Pipeline Testing](#10-cicd-pipeline-testing)
11. [Test Automation](#11-test-automation)
12. [Quality Gates](#12-quality-gates)
13. [Tests Post-Déploiement (Smoke Tests)](#13-tests-post-déploiement-smoke-tests)
14. [Tests de l'Algorithme de Découverte](#14-tests-de-lalgorithme-de-découverte)
15. [Stratégie de Test Éthique](#15-stratégie-de-test-éthique)
## 🔒 IMMUTABLE RULES
1. **Minimum Coverage**: 80% line coverage for new code; CI/CD blocks the merge if < 80%
2. **Test Before Merge**: no PR is merged without tests, no exceptions
3. **Test Isolation**: every test is independent (no shared state, no order dependency)
4. **Fast Feedback**: unit tests < 2 min, integration < 5 min, E2E < 10 min
5. **Deterministic**: tests are 100% reproducible (no flaky tests tolerated)
6. **Test Data**: isolated fixtures, automatic cleanup (no test pollution)
7. **Mocking**: external dependencies are mocked (DB, APIs, time)
8. **Regression**: every bug fix ships with a new test (prevent recurrence)
9. **Documentation**: tests serve as documentation (readable, self-explanatory)
10. **Performance**: performance tests run in CI/CD (regression detection)
## 1. TESTING PHILOSOPHY
### 1.1 Testing Pyramid
```
            ╱╲
           ╱  ╲
          ╱ E2E ╲          10% - Slow, Brittle, High Cost
         ╱────────╲
        ╱          ╲
       ╱ Integration ╲     20% - Medium Speed, Medium Cost
      ╱──────────────╲
     ╱                ╲
    ╱    Unit Tests    ╲   70% - Fast, Reliable, Low Cost
   ╱────────────────────╲
```
**Distribution**:
- **Unit Tests (70%)**: Fast, isolated, test single functions/methods
- **Integration Tests (20%)**: Medium speed, test component interactions
- **E2E Tests (10%)**: Slow, test complete user flows
### 1.2 Testing Principles
**F.I.R.S.T Principles**:
- **Fast**: Tests execute quickly (unit < 100ms, all tests < 10min)
- **Independent**: No test depends on another (can run in any order)
- **Repeatable**: Same result every time (deterministic)
- **Self-Validating**: Pass/fail clear (no manual inspection)
- **Timely**: Written with code (not after, preferably before - TDD)
**Test Qualities**:
- **Readable**: Anyone can understand what's being tested
- **Maintainable**: Easy to update when requirements change
- **Trustworthy**: No false positives/negatives (no flaky tests)
- **Comprehensive**: Cover happy path + edge cases + error cases
### 1.3 TDD (Test-Driven Development)
**Red-Green-Refactor Cycle**:
```
1. 🔴 RED: Write failing test first
2. 🟢 GREEN: Write minimum code to pass
3. 🔵 REFACTOR: Clean up code while tests pass
4. Repeat
```
**Example Flow**:
```go
// 1. RED - Write failing test
func TestCreateUser_Success(t *testing.T) {
    service := NewUserService()
    user := &User{Email: "test@example.com"}

    err := service.CreateUser(user)

    assert.NoError(t, err)
    assert.NotEmpty(t, user.ID)
}
// Test fails - CreateUser doesn't exist yet

// 2. GREEN - Implement minimum code
func (s *UserService) CreateUser(user *User) error {
    user.ID = uuid.New().String()
    return nil
}
// Test passes

// 3. REFACTOR - Improve code
func (s *UserService) CreateUser(user *User) error {
    if err := validateEmail(user.Email); err != nil {
        return err
    }
    user.ID = uuid.New().String()
    return s.repo.Save(user)
}
// Tests still pass
```
## 2. TEST TYPES & COVERAGE
### 2.1 Coverage Requirements
| Test Type | Coverage Target | Execution Time | Frequency |
|-----------|----------------|----------------|-----------|
| **Unit Tests** | 80% line coverage | < 2 min | Every commit |
| **Integration Tests** | 70% API endpoints | < 5 min | Every commit |
| **E2E Tests** | 50% critical flows | < 10 min | Every commit (smoke), Full nightly |
| **Performance Tests** | 100% critical endpoints | < 15 min | Nightly + Pre-release |
| **Security Tests** | 100% OWASP Top 10 | < 20 min | Weekly + Pre-release |
| **Load Tests** | 100% production scenarios | 30-60 min | Weekly + Pre-release |
### 2.2 Coverage Metrics
**Tracked Metrics**:
- **Line Coverage**: % of lines executed during tests
- **Branch Coverage**: % of conditional branches tested
- **Function Coverage**: % of functions called
- **Statement Coverage**: % of statements executed
**Tools**:
- **Go**: `go test -cover`, `gocov`
- **Rust**: `cargo tarpaulin`
- **TypeScript**: `vitest --coverage` (c8)
**Example Coverage Report**:
```bash
# Go
go test ./... -coverprofile=coverage.out
go tool cover -html=coverage.out -o coverage.html
# Coverage output:
# internal/services/user_service.go: CreateUser 85.7%
# internal/services/track_service.go: UploadTrack 92.3%
# internal/handlers/auth_handlers.go: Login 78.5% ⚠️ Below 80%
```
## 3. UNIT TESTING
### 3.1 Unit Test Structure (AAA Pattern)
**Arrange-Act-Assert**:
```go
func TestUserService_CreateUser_Success(t *testing.T) {
    // ARRANGE - Setup test data and dependencies
    mockRepo := &MockUserRepository{}
    mockRepo.CreateFunc = func(user *User) error {
        return nil
    }
    service := NewUserService(mockRepo)
    user := &User{
        Email:    "test@example.com",
        Username: "testuser",
    }

    // ACT - Execute the function under test
    err := service.CreateUser(user)

    // ASSERT - Verify the outcome
    assert.NoError(t, err)
    assert.NotEmpty(t, user.ID)
    assert.NotEmpty(t, user.CreatedAt)
}
```
### 3.2 Test Naming Convention
**Format**: `Test<FunctionName>_<Scenario>_<ExpectedBehavior>`
**Examples**:
```go
// ✅ GOOD - Descriptive test names
func TestCreateUser_ValidData_Success(t *testing.T) { }
func TestCreateUser_DuplicateEmail_ReturnsConflictError(t *testing.T) { }
func TestCreateUser_InvalidEmail_ReturnsValidationError(t *testing.T) { }
func TestGetUserByID_ExistingUser_ReturnsUser(t *testing.T) { }
func TestGetUserByID_NonExistentUser_ReturnsNotFoundError(t *testing.T) { }
// ❌ BAD - Unclear test names
func TestCreateUser(t *testing.T) { }
func TestUser(t *testing.T) { }
func TestSuccess(t *testing.T) { }
```
### 3.3 Table-Driven Tests
```go
func TestValidateEmail(t *testing.T) {
    tests := []struct {
        name    string
        email   string
        wantErr bool
        errMsg  string
    }{
        {
            name:    "valid email",
            email:   "user@example.com",
            wantErr: false,
        },
        {
            name:    "valid email with subdomain",
            email:   "user@mail.example.com",
            wantErr: false,
        },
        {
            name:    "missing @ symbol",
            email:   "userexample.com",
            wantErr: true,
            errMsg:  "invalid email format",
        },
        {
            name:    "missing domain",
            email:   "user@",
            wantErr: true,
            errMsg:  "invalid email format",
        },
        {
            name:    "empty string",
            email:   "",
            wantErr: true,
            errMsg:  "email is required",
        },
    }

    for _, tt := range tests {
        t.Run(tt.name, func(t *testing.T) {
            err := validateEmail(tt.email)
            if tt.wantErr {
                assert.Error(t, err)
                if tt.errMsg != "" {
                    assert.Contains(t, err.Error(), tt.errMsg)
                }
            } else {
                assert.NoError(t, err)
            }
        })
    }
}
```
### 3.4 Mocking (Go)
**Interface-Based Mocking**:
```go
// Production interface
type UserRepository interface {
    Create(user *User) error
    GetByID(id string) (*User, error)
    GetByEmail(email string) (*User, error)
    Update(user *User) error
    Delete(id string) error
}

// Mock implementation
type MockUserRepository struct {
    CreateFunc     func(user *User) error
    GetByIDFunc    func(id string) (*User, error)
    GetByEmailFunc func(email string) (*User, error)
    UpdateFunc     func(user *User) error
    DeleteFunc     func(id string) error
}

func (m *MockUserRepository) Create(user *User) error {
    if m.CreateFunc != nil {
        return m.CreateFunc(user)
    }
    return nil
}

func (m *MockUserRepository) GetByID(id string) (*User, error) {
    if m.GetByIDFunc != nil {
        return m.GetByIDFunc(id)
    }
    return nil, errors.New("not implemented")
}

// Test usage
func TestUserService_CreateUser(t *testing.T) {
    mockRepo := &MockUserRepository{
        CreateFunc: func(user *User) error {
            user.ID = "mock-id-123"
            return nil
        },
        GetByEmailFunc: func(email string) (*User, error) {
            return nil, gorm.ErrRecordNotFound // Simulate no existing user
        },
    }
    service := NewUserService(mockRepo)
    user := &User{Email: "test@example.com"}

    err := service.CreateUser(user)

    assert.NoError(t, err)
    assert.Equal(t, "mock-id-123", user.ID)
}
```
### 3.5 Testing Async Code (TypeScript)
```typescript
import { describe, it, expect, vi } from 'vitest';

describe('UserService', () => {
  it('should create user successfully', async () => {
    // ARRANGE
    const mockRepo = {
      create: vi.fn().mockResolvedValue({ id: 'user-123' }),
      getByEmail: vi.fn().mockResolvedValue(null),
    };
    const service = new UserService(mockRepo);
    const user = { email: 'test@example.com', username: 'testuser' };

    // ACT
    const result = await service.createUser(user);

    // ASSERT
    expect(result).toHaveProperty('id');
    expect(mockRepo.create).toHaveBeenCalledWith(
      expect.objectContaining({ email: 'test@example.com' })
    );
  });

  it('should handle errors gracefully', async () => {
    // ARRANGE
    const mockRepo = {
      create: vi.fn().mockRejectedValue(new Error('Database error')),
      getByEmail: vi.fn().mockResolvedValue(null),
    };
    const service = new UserService(mockRepo);

    // ACT & ASSERT
    await expect(service.createUser({ email: 'test@example.com' }))
      .rejects
      .toThrow('Database error');
  });
});
```
## 4. INTEGRATION TESTING
### 4.1 API Integration Tests (Go)
**Test with Real Database (Testcontainers)**:
```go
import (
    "bytes"
    "context"
    "encoding/json"
    "fmt"
    "net/http/httptest"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/require"
    "github.com/testcontainers/testcontainers-go"
    "github.com/testcontainers/testcontainers-go/wait"
    "gorm.io/driver/postgres"
    "gorm.io/gorm"
)

// testing.TB lets the same helper serve both tests and benchmarks (see section 6.2).
func setupTestDB(t testing.TB) *gorm.DB {
    ctx := context.Background()

    // Start PostgreSQL container
    req := testcontainers.ContainerRequest{
        Image:        "postgres:15-alpine",
        ExposedPorts: []string{"5432/tcp"},
        Env: map[string]string{
            "POSTGRES_DB":       "test_db",
            "POSTGRES_USER":     "test",
            "POSTGRES_PASSWORD": "test",
        },
        WaitingFor: wait.ForLog("database system is ready to accept connections"),
    }
    container, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
        ContainerRequest: req,
        Started:          true,
    })
    require.NoError(t, err)
    t.Cleanup(func() {
        container.Terminate(ctx)
    })

    // Get connection string
    host, _ := container.Host(ctx)
    port, _ := container.MappedPort(ctx, "5432")
    dsn := fmt.Sprintf("host=%s port=%s user=test password=test dbname=test_db sslmode=disable", host, port.Port())

    // Connect and migrate
    db, err := gorm.Open(postgres.Open(dsn), &gorm.Config{})
    require.NoError(t, err)
    db.AutoMigrate(&User{}, &Track{}, &Playlist{})
    return db
}

func TestUserAPI_CreateUser_Integration(t *testing.T) {
    // ARRANGE
    db := setupTestDB(t)
    router := setupRouter(db)
    payload := map[string]interface{}{
        "email":    "test@example.com",
        "username": "testuser",
        "password": "SecurePass123!",
    }
    body, _ := json.Marshal(payload)
    req := httptest.NewRequest("POST", "/api/v1/users", bytes.NewReader(body))
    req.Header.Set("Content-Type", "application/json")
    w := httptest.NewRecorder()

    // ACT
    router.ServeHTTP(w, req)

    // ASSERT
    assert.Equal(t, 201, w.Code)
    var response map[string]interface{}
    json.Unmarshal(w.Body.Bytes(), &response)
    assert.Contains(t, response, "data")
    userData := response["data"].(map[string]interface{})
    assert.Equal(t, "test@example.com", userData["email"])
    assert.NotEmpty(t, userData["id"])

    // Verify in database
    var user User
    err := db.First(&user, "email = ?", "test@example.com").Error
    assert.NoError(t, err)
    assert.Equal(t, "testuser", user.Username)
}
```
### 4.2 Database Integration Tests (Rust)
All Rust services (chat-server, stream-server) must be tested against a real database via `sqlx::test`. The tests use a temporary PostgreSQL database that the `#[sqlx::test]` macro creates and destroys automatically.
**Basic CRUD test**:
```rust
use sqlx::PgPool;
use uuid::Uuid;

#[sqlx::test]
async fn test_create_user(pool: PgPool) -> sqlx::Result<()> {
    // ARRANGE
    let user = User {
        id: Uuid::new_v4(),
        email: "test@example.com".to_string(),
        username: "testuser".to_string(),
        password_hash: "hashed_password".to_string(),
    };

    // ACT
    let result = sqlx::query!(
        "INSERT INTO users (id, email, username, password_hash) VALUES ($1, $2, $3, $4)",
        user.id,
        user.email,
        user.username,
        user.password_hash
    )
    .execute(&pool)
    .await?;

    // ASSERT
    assert_eq!(result.rows_affected(), 1);
    let fetched_user = sqlx::query_as!(
        User,
        "SELECT id, email, username, password_hash, created_at FROM users WHERE id = $1",
        user.id
    )
    .fetch_one(&pool)
    .await?;
    assert_eq!(fetched_user.email, "test@example.com");
    assert_eq!(fetched_user.username, "testuser");
    Ok(())
}
```
**Transactional and integrity-constraint tests**:
```rust
#[sqlx::test]
async fn test_unique_email_constraint(pool: PgPool) -> sqlx::Result<()> {
    let id1 = Uuid::new_v4();
    let id2 = Uuid::new_v4();
    sqlx::query!(
        "INSERT INTO users (id, email, username, password_hash) VALUES ($1, $2, $3, $4)",
        id1, "duplicate@example.com", "user1", "hash1"
    )
    .execute(&pool)
    .await?;

    let result = sqlx::query!(
        "INSERT INTO users (id, email, username, password_hash) VALUES ($1, $2, $3, $4)",
        id2, "duplicate@example.com", "user2", "hash2"
    )
    .execute(&pool)
    .await;

    assert!(result.is_err(), "Duplicate email must be rejected");
    Ok(())
}

#[sqlx::test]
async fn test_cascade_delete_user_tracks(pool: PgPool) -> sqlx::Result<()> {
    let user_id = Uuid::new_v4();
    let track_id = Uuid::new_v4();
    sqlx::query!(
        "INSERT INTO users (id, email, username, password_hash) VALUES ($1, $2, $3, $4)",
        user_id, "artist@example.com", "artist", "hash"
    )
    .execute(&pool)
    .await?;
    sqlx::query!(
        "INSERT INTO tracks (id, user_id, title, genre, duration_seconds) VALUES ($1, $2, $3, $4, $5)",
        track_id, user_id, "My Song", "electronic", 240
    )
    .execute(&pool)
    .await?;

    sqlx::query!("DELETE FROM users WHERE id = $1", user_id)
        .execute(&pool)
        .await?;

    let track = sqlx::query!("SELECT id FROM tracks WHERE id = $1", track_id)
        .fetch_optional(&pool)
        .await?;
    assert!(track.is_none(), "Tracks must be cascade-deleted with user");
    Ok(())
}
```
**CI integration**: the Rust DB tests run in the `integration-tests` job of the GitHub Actions workflow, with a dedicated PostgreSQL service (see section 10.1).
### 4.3 API Contract Testing
**OpenAPI Schema Validation**:
```typescript
import { describe, it, expect } from 'vitest';
import { validateAgainstSchema } from './openapi-validator';

describe('API Contract Tests', () => {
  it('POST /users response matches OpenAPI schema', async () => {
    const response = await fetch('http://localhost:8080/api/v1/users', {
      method: 'POST',
      body: JSON.stringify({
        email: 'test@example.com',
        username: 'testuser',
        password: 'SecurePass123!',
      }),
      headers: { 'Content-Type': 'application/json' },
    });
    const data = await response.json();

    // Validate against OpenAPI schema
    const validation = validateAgainstSchema('POST', '/users', 201, data);

    expect(validation.valid).toBe(true);
    expect(validation.errors).toHaveLength(0);
  });
});
```
## 5. END-TO-END TESTING
### 5.1 E2E Testing with Playwright
**Setup**:
```typescript
// playwright.config.ts
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './tests/e2e',
  timeout: 30000,
  retries: 2,
  use: {
    baseURL: 'http://localhost:3000',
    screenshot: 'only-on-failure',
    video: 'retain-on-failure',
    trace: 'on-first-retry',
  },
  projects: [
    { name: 'chromium', use: { browserName: 'chromium' } },
    { name: 'firefox', use: { browserName: 'firefox' } },
    { name: 'webkit', use: { browserName: 'webkit' } },
  ],
  webServer: {
    command: 'npm run dev',
    port: 3000,
    reuseExistingServer: !process.env.CI,
  },
});
```
**E2E Test Example**:
```typescript
import { test, expect } from '@playwright/test';

test.describe('User Registration Flow', () => {
  test('should register new user successfully', async ({ page }) => {
    // Navigate to registration page
    await page.goto('/register');

    // Fill registration form
    await page.fill('input[name="email"]', 'newuser@example.com');
    await page.fill('input[name="username"]', 'newuser');
    await page.fill('input[name="password"]', 'SecurePass123!');
    await page.fill('input[name="confirmPassword"]', 'SecurePass123!');

    // Submit form
    await page.click('button[type="submit"]');

    // Wait for redirect to dashboard
    await page.waitForURL('/dashboard');

    // Verify user is logged in
    await expect(page.locator('text=Welcome, newuser')).toBeVisible();

    // Verify email verification notice
    await expect(page.locator('text=Please verify your email')).toBeVisible();
  });

  test('should show validation errors for invalid data', async ({ page }) => {
    await page.goto('/register');

    // Submit empty form
    await page.click('button[type="submit"]');

    // Verify error messages
    await expect(page.locator('text=Email is required')).toBeVisible();
    await expect(page.locator('text=Username is required')).toBeVisible();
    await expect(page.locator('text=Password is required')).toBeVisible();
  });
});

test.describe('Track Upload Flow', () => {
  test('should upload track successfully', async ({ page }) => {
    // Login first
    await page.goto('/login');
    await page.fill('input[name="email"]', 'creator@example.com');
    await page.fill('input[name="password"]', 'password123');
    await page.click('button[type="submit"]');
    await page.waitForURL('/dashboard');

    // Navigate to upload page
    await page.goto('/upload');

    // Upload file
    await page.setInputFiles('input[type="file"]', './fixtures/test-track.mp3');

    // Fill track details
    await page.fill('input[name="title"]', 'Test Track');
    await page.fill('textarea[name="description"]', 'This is a test track');
    await page.selectOption('select[name="genre"]', 'electronic');

    // Submit
    await page.click('button[type="submit"]');

    // Wait for processing message
    await expect(page.locator('text=Processing your track')).toBeVisible();

    // Verify redirect to track page
    await page.waitForURL(/\/tracks\/[a-z0-9-]+/, { timeout: 60000 });

    // Verify track details
    await expect(page.locator('h1', { hasText: 'Test Track' })).toBeVisible();
  });
});
```
### 5.2 Visual Regression Testing
```typescript
import { test, expect } from '@playwright/test';

test('homepage visual regression', async ({ page }) => {
  await page.goto('/');

  // Take screenshot and compare with baseline
  await expect(page).toHaveScreenshot('homepage.png', {
    fullPage: true,
    maxDiffPixels: 100, // Allow small differences
  });
});

test('user profile visual regression', async ({ page }) => {
  await page.goto('/users/johndoe');

  // Hide dynamic elements (timestamps, play counts)
  await page.locator('.last-activity').evaluate(el => el.style.visibility = 'hidden');
  await page.locator('.play-count').evaluate(el => el.style.visibility = 'hidden');

  await expect(page).toHaveScreenshot('user-profile.png');
});
```
## 6. PERFORMANCE TESTING
### 6.1 Performance Tests with k6
**Load Testing Script**:
```javascript
// k6-load-test.js
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  stages: [
    { duration: '2m', target: 100 }, // Ramp up to 100 users
    { duration: '5m', target: 100 }, // Stay at 100 users
    { duration: '2m', target: 200 }, // Ramp up to 200 users
    { duration: '5m', target: 200 }, // Stay at 200 users
    { duration: '2m', target: 0 },   // Ramp down to 0
  ],
  thresholds: {
    http_req_duration: ['p(95)<100'], // 95% of requests < 100ms
    http_req_failed: ['rate<0.01'],   // < 1% failure rate
  },
};

export default function () {
  // Test GET /tracks
  const tracksRes = http.get('https://api.veza.app/v1/tracks');
  check(tracksRes, {
    'status is 200': (r) => r.status === 200,
    'response time < 100ms': (r) => r.timings.duration < 100,
  });
  sleep(1);

  // Test GET /tracks/{id}
  const trackRes = http.get('https://api.veza.app/v1/tracks/track-123');
  check(trackRes, {
    'status is 200': (r) => r.status === 200,
    'response time < 50ms': (r) => r.timings.duration < 50,
  });
  sleep(1);
}
```
**Run k6 Tests**:
```bash
# Local test
k6 run k6-load-test.js
# Cloud test (k6 Cloud)
k6 cloud k6-load-test.js
# Output to InfluxDB for Grafana dashboard
k6 run --out influxdb=http://localhost:8086/k6 k6-load-test.js
```
### 6.2 Database Performance Tests
```go
func BenchmarkGetUserByID(b *testing.B) {
    db := setupTestDB(b) // setupTestDB accepts testing.TB (section 4.1)
    repo := NewUserRepository(db)

    // Create test user
    user := &User{Email: "bench@example.com", Username: "benchuser"}
    repo.Create(user)

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _, err := repo.GetByID(user.ID)
        if err != nil {
            b.Fatal(err)
        }
    }
}

func BenchmarkGetUserByEmail(b *testing.B) {
    db := setupTestDB(b)
    repo := NewUserRepository(db)
    user := &User{Email: "bench@example.com", Username: "benchuser"}
    repo.Create(user)

    b.ResetTimer()
    for i := 0; i < b.N; i++ {
        _, err := repo.GetByEmail("bench@example.com")
        if err != nil {
            b.Fatal(err)
        }
    }
}

// Run benchmarks:
// go test -bench=. -benchmem
// BenchmarkGetUserByID-8      50000  25000 ns/op  1024 B/op  15 allocs/op
// BenchmarkGetUserByEmail-8   45000  28000 ns/op  1152 B/op  17 allocs/op
```
## 7. SECURITY TESTING
### 7.1 SAST (Static Application Security Testing)
**Tools**:
- **Go**: `gosec`, `nancy` (dependency scanning)
- **Rust**: `cargo audit`, `cargo-geiger`
- **TypeScript**: `npm audit`, `snyk`
**CI/CD Integration**:
```yaml
# .github/workflows/security-scan.yml
name: Security Scan
on: [push, pull_request]
jobs:
  security:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      # Go security scan
      - name: Run gosec
        uses: securego/gosec@master
        with:
          args: '-fmt=sarif -out=results.sarif ./...'
      # Rust dependency audit
      - name: Cargo audit
        run: cargo audit
      # Node.js audit
      - name: NPM audit
        run: npm audit --production
      # Snyk scan
      - name: Run Snyk
        uses: snyk/actions/node@master
        env:
          SNYK_TOKEN: ${{ secrets.SNYK_TOKEN }}
```
### 7.2 DAST (Dynamic Application Security Testing)
**OWASP ZAP Automated Scan**:
```bash
# Start application
docker-compose up -d
# Run ZAP baseline scan
docker run -t owasp/zap2docker-stable zap-baseline.py \
-t https://staging.veza.app \
-r zap-report.html
# Run ZAP full scan
docker run -t owasp/zap2docker-stable zap-full-scan.py \
-t https://staging.veza.app \
-r zap-full-report.html
```
### 7.3 Penetration Testing
**Automated Tests**:
```typescript
import { test, expect } from '@playwright/test';

test.describe('Security Tests', () => {
  test('should prevent SQL injection', async ({ request }) => {
    // Try SQL injection in email field
    const response = await request.post('/api/v1/auth/login', {
      data: {
        email: "admin' OR '1'='1",
        password: 'anything',
      },
    });

    // Should return 400 validation error, not 200
    expect(response.status()).toBe(400);
    const data = await response.json();
    expect(data.error.code).toBe(4001); // Invalid format
  });

  test('should prevent XSS', async ({ page }) => {
    await page.goto('/register');

    // Try XSS in username field
    await page.fill('input[name="username"]', '<script>alert("XSS")</script>');
    await page.fill('input[name="email"]', 'test@example.com');
    await page.fill('input[name="password"]', 'SecurePass123!');
    await page.click('button[type="submit"]');

    // Should show validation error
    await expect(page.locator('text=Invalid username format')).toBeVisible();
  });

  test('should enforce rate limiting', async ({ request }) => {
    // Send 10 requests quickly
    const promises = Array.from({ length: 10 }, () =>
      request.post('/api/v1/auth/login', {
        data: { email: 'test@example.com', password: 'wrong' },
      })
    );
    const responses = await Promise.all(promises);

    // At least one should be rate limited (429)
    const rateLimited = responses.filter(r => r.status() === 429);
    expect(rateLimited.length).toBeGreaterThan(0);
  });
});
```
## 8. LOAD & STRESS TESTING
### 8.1 Load Testing Scenarios
**Scenario 1: Normal Load (1000 concurrent users)**:
```javascript
// k6-normal-load.js
import http from 'k6/http';
import { sleep } from 'k6';

export const options = {
  vus: 1000,
  duration: '10m',
  thresholds: {
    http_req_duration: ['p(95)<100', 'p(99)<200'],
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  // Simulate realistic user behavior
  http.get('https://api.veza.app/v1/tracks');
  sleep(Math.random() * 3 + 2); // 2-5s
  http.get('https://api.veza.app/v1/tracks/track-123');
  sleep(Math.random() * 5 + 5); // 5-10s
  http.post('https://api.veza.app/v1/tracks/track-123/play');
  sleep(Math.random() * 180 + 60); // 1-4min (listening to track)
}
```
**Scenario 2: Peak Load (5000 concurrent users)**:
```javascript
export const options = {
  stages: [
    { duration: '5m', target: 5000 },
    { duration: '10m', target: 5000 },
    { duration: '5m', target: 0 },
  ],
};
```
**Scenario 3: Stress Test (Find breaking point)**:
```javascript
export const options = {
  stages: [
    { duration: '2m', target: 1000 },
    { duration: '5m', target: 1000 },
    { duration: '2m', target: 2000 },
    { duration: '5m', target: 2000 },
    { duration: '2m', target: 5000 },
    { duration: '5m', target: 5000 },
    { duration: '2m', target: 10000 }, // Push to breaking point
    { duration: '5m', target: 10000 },
    { duration: '10m', target: 0 },
  ],
};
```
### 8.2 Spike Testing
```javascript
// k6-spike-test.js
export const options = {
  stages: [
    { duration: '1m', target: 100 },    // Normal load
    { duration: '10s', target: 10000 }, // Sudden spike
    { duration: '3m', target: 10000 },  // Stay at spike
    { duration: '10s', target: 100 },   // Drop back
    { duration: '1m', target: 100 },    // Recover
  ],
};
```
## 9. TEST DATA MANAGEMENT
### 9.1 Test Fixtures
**Go Test Fixtures**:
```go
// testdata/fixtures.go
package testdata

func CreateTestUser() *User {
    return &User{
        ID:       uuid.New().String(),
        Email:    "test@example.com",
        Username: "testuser",
        Role:     "user",
    }
}

func CreateTestTrack() *Track {
    return &Track{
        ID:              uuid.New().String(),
        Title:           "Test Track",
        Artist:          "Test Artist",
        Genre:           "electronic",
        DurationSeconds: 240,
    }
}
```
**TypeScript Fixtures**:
```typescript
// tests/fixtures/users.ts
export const testUsers = {
  normalUser: {
    id: 'user-123',
    email: 'user@example.com',
    username: 'testuser',
    role: 'user',
  },
  premiumUser: {
    id: 'user-456',
    email: 'premium@example.com',
    username: 'premiumuser',
    role: 'premium',
  },
  admin: {
    id: 'user-789',
    email: 'admin@example.com',
    username: 'admin',
    role: 'admin',
  },
};
```
### 9.2 Test Database Seeding
```sql
-- tests/seed.sql
-- Seed database with test data
INSERT INTO users (id, email, username, password_hash, role)
VALUES
('user-123', 'user@example.com', 'testuser', 'hashed_password', 'user'),
('user-456', 'premium@example.com', 'premiumuser', 'hashed_password', 'premium'),
('user-789', 'admin@example.com', 'admin', 'hashed_password', 'admin');
INSERT INTO tracks (id, user_id, title, artist, genre, duration_seconds)
VALUES
('track-123', 'user-123', 'Test Track 1', 'Test Artist', 'electronic', 240),
('track-456', 'user-123', 'Test Track 2', 'Test Artist', 'rock', 180),
('track-789', 'user-456', 'Premium Track', 'Premium Artist', 'jazz', 300);
```
### 9.3 Cleanup Strategy
```go
func TestMain(m *testing.M) {
    // Setup
    db := setupTestDB()

    // Run tests
    code := m.Run()

    // Cleanup
    db.Exec("TRUNCATE TABLE users CASCADE")
    db.Exec("TRUNCATE TABLE tracks CASCADE")
    db.Exec("TRUNCATE TABLE playlists CASCADE")
    os.Exit(code)
}

func setupTest(t *testing.T, db *gorm.DB) {
    t.Cleanup(func() {
        db.Exec("DELETE FROM users")
        db.Exec("DELETE FROM tracks")
    })
}
```
## 10. CI/CD PIPELINE TESTING
### 10.1 GitHub Actions Workflow
```yaml
# .github/workflows/test.yml
name: Test Suite
on:
  push:
    branches: [main, develop]
  pull_request:
    branches: [main, develop]
jobs:
  unit-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 10
    steps:
      - uses: actions/checkout@v3
      # Go tests
      - name: Setup Go
        uses: actions/setup-go@v4
        with:
          go-version: '1.21'
      - name: Run Go tests
        run: |
          cd veza-backend-api
          go test ./... -v -coverprofile=coverage.out
          go tool cover -func=coverage.out
      - name: Check coverage
        run: |
          coverage=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
          if (( $(echo "$coverage < 80" | bc -l) )); then
            echo "Coverage $coverage% is below 80%"
            exit 1
          fi
      # Rust tests
      - name: Setup Rust
        uses: actions-rs/toolchain@v1
        with:
          toolchain: stable
      - name: Run Rust tests
        run: |
          cd veza-chat-server
          cargo test --all-features
      # TypeScript tests
      - name: Setup Node
        uses: actions/setup-node@v3
        with:
          node-version: '20'
      - name: Run TypeScript tests
        run: |
          cd apps/web
          npm ci
          npm run test -- --coverage
      - name: Upload coverage
        uses: codecov/codecov-action@v3
        with:
          files: ./coverage/coverage-final.json
  integration-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 15
    services:
      postgres:
        image: postgres:15-alpine
        env:
          POSTGRES_PASSWORD: postgres
          POSTGRES_DB: test_db
        ports:
          - 5432:5432
        options: >-
          --health-cmd pg_isready
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
      redis:
        image: redis:7-alpine
        ports:
          - 6379:6379
        options: >-
          --health-cmd "redis-cli ping"
          --health-interval 10s
          --health-timeout 5s
          --health-retries 5
    steps:
      - uses: actions/checkout@v3
      - name: Run integration tests
        env:
          DATABASE_URL: postgres://postgres:postgres@localhost:5432/test_db
          REDIS_URL: redis://localhost:6379
        run: |
          cd veza-backend-api
          go test ./tests/integration/... -v
  e2e-tests:
    runs-on: ubuntu-latest
    timeout-minutes: 20
    steps:
      - uses: actions/checkout@v3
      - name: Install Playwright
        run: |
          cd apps/web
          npm ci
          npx playwright install --with-deps
      - name: Start services
        run: docker-compose up -d
      - name: Wait for services
        run: |
          timeout 60 bash -c 'until curl -f http://localhost:8080/health; do sleep 2; done'
      - name: Run E2E tests
        run: |
          cd apps/web
          npx playwright test
      - name: Upload test results
        if: always()
        uses: actions/upload-artifact@v3
        with:
          name: playwright-report
          path: apps/web/playwright-report/
```
## 11. TEST AUTOMATION
### 11.1 Pre-Commit Hooks (Husky)
```json
// package.json
{
  "scripts": {
    "test": "vitest run",
    "test:unit": "vitest run --filter=unit",
    "lint": "eslint . --ext .ts,.tsx",
    "format": "prettier --write ."
  },
  "lint-staged": {
    "*.{ts,tsx}": [
      "eslint --fix",
      "prettier --write",
      "vitest related --run"
    ]
  },
  "husky": {
    "hooks": {
      "pre-commit": "lint-staged",
      "pre-push": "npm test"
    }
  }
}
```
### 11.2 Test Reporting
**JUnit XML Report**:
```bash
# Go
go test ./... -v 2>&1 | go-junit-report > report.xml
# Vitest
vitest run --reporter=junit --outputFile=report.xml
```
**HTML Coverage Report**:
```bash
# Go
go test ./... -coverprofile=coverage.out
go tool cover -html=coverage.out -o coverage.html
# Vitest
vitest run --coverage
# Opens coverage/index.html
```
## 12. QUALITY GATES
### 12.1 Merge Requirements
**Before PR can be merged**:
- [ ] All tests pass (unit, integration, E2E)
- [ ] Coverage ≥ 80% for new/modified files
- [ ] No linter errors (warnings ≤ 5)
- [ ] No security vulnerabilities (high/critical)
- [ ] Performance benchmarks pass (no regression > 10%)
- [ ] Code review approved (≥ 2 reviewers)
- [ ] Documentation updated
### 12.2 Release Quality Gates
**Before release to production**:
- [ ] All tests pass (including load tests)
- [ ] Coverage ≥ 80% (overall)
- [ ] Zero known high/critical security vulnerabilities
- [ ] Performance targets met (p95 < 100ms)
- [ ] Load test passed (5000 concurrent users)
- [ ] E2E tests passed (all critical flows)
- [ ] Security scan passed (OWASP ZAP)
- [ ] Smoke tests passed in staging
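The "no regression > 10%" benchmark rule from the merge gates reduces to a single comparison against the stored baseline. A sketch under stated assumptions: the `regressionExceeded` helper and its latency-in-milliseconds convention (higher is worse) are illustrative, not an existing tool in the pipeline:

```go
package main

import "fmt"

// regressionExceeded reports whether the new benchmark result regressed
// more than maxRegression (e.g. 0.10 for 10%) relative to the baseline.
// Values are latencies in milliseconds, so higher means slower.
func regressionExceeded(baselineMs, newMs, maxRegression float64) bool {
	if baselineMs <= 0 {
		// No usable baseline: do not block, flag for manual review instead.
		return false
	}
	return (newMs-baselineMs)/baselineMs > maxRegression
}

func main() {
	fmt.Println(regressionExceeded(100, 108, 0.10)) // 8% slower: within budget, prints false
	fmt.Println(regressionExceeded(100, 115, 0.10)) // 15% slower: gate trips, prints true
}
```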
## 13. POST-DEPLOYMENT TESTS (SMOKE TESTS)
### 13.1 Automated Smoke Tests
After every deployment to staging or production, a set of automated smoke tests must run to validate that the core services are working correctly.
**Smoke test script**:
```bash
#!/bin/bash
# scripts/smoke-tests.sh
set -euo pipefail
BASE_URL="${1:-https://api.veza.app}"
FAILURES=0
check() {
  local name="$1" url="$2" expected_status="$3"
  status=$(curl -s -o /dev/null -w "%{http_code}" "$url")
  if [ "$status" != "$expected_status" ]; then
    echo "FAIL: $name — expected $expected_status, got $status"
    FAILURES=$((FAILURES + 1))
  else
    echo "PASS: $name"
  fi
}
check "Health endpoint" "$BASE_URL/health" "200"
check "API version" "$BASE_URL/v1/version" "200"
check "Tracks listing" "$BASE_URL/v1/tracks" "200"
check "Auth (no token)" "$BASE_URL/v1/me" "401"
check "Discovery endpoint" "$BASE_URL/v1/discover" "200"
check "Stream health" "$BASE_URL/v1/stream/health" "200"
if [ "$FAILURES" -gt 0 ]; then
  echo "SMOKE TESTS FAILED: $FAILURES failure(s)"
  exit 1
fi
echo "ALL SMOKE TESTS PASSED"
```
### 13.2 CI/CD Integration
The smoke tests run automatically:
- **Post-deploy staging**: blocking; a failure triggers an automatic rollback
- **Post-deploy production**: blocking; a failure triggers a rollback plus a PagerDuty alert
```yaml
# Excerpt from the deploy-production.yml workflow
- name: Run post-deployment smoke tests
  run: |
    ./scripts/smoke-tests.sh https://api.veza.app
  timeout-minutes: 5
- name: Rollback on smoke test failure
  if: failure()
  run: |
    kubectl rollout undo deployment/veza-backend -n veza-production
    echo "ROLLBACK TRIGGERED — smoke tests failed"
```
### 13.3 WebSocket Smoke Tests (Chat Server)
```typescript
import WebSocket from 'ws';
async function smokeTestWebSocket(url: string): Promise<void> {
  return new Promise((resolve, reject) => {
    const ws = new WebSocket(`${url}/ws/health`);
    const timeout = setTimeout(() => {
      ws.close();
      reject(new Error('WebSocket health check timed out'));
    }, 5000);
    ws.on('open', () => {
      clearTimeout(timeout);
      ws.close();
      resolve();
    });
    ws.on('error', (err) => {
      clearTimeout(timeout);
      reject(err);
    });
  });
}
```
## 14. DISCOVERY ALGORITHM TESTS
### 14.1 Objective
Veza's discovery algorithm is a core ethical component: it must **never** favor popular artists at the expense of emerging ones. Tests must verify this property automatically.
### 14.2 Distribution Tests (Go)
```go
func TestDiscovery_DoesNotFavorPopularArtists(t *testing.T) {
	db := setupTestDB(t)
	// Seed: 10 popular artists (>10k plays) and 40 emerging artists (<100 plays)
	for i := 0; i < 10; i++ {
		seedArtist(t, db, fmt.Sprintf("popular-%d", i), 10000+rand.Intn(50000))
	}
	for i := 0; i < 40; i++ {
		seedArtist(t, db, fmt.Sprintf("emerging-%d", i), rand.Intn(100))
	}
	service := NewDiscoveryService(db)
	results, err := service.Discover(context.Background(), DiscoveryParams{
		Limit: 20,
	})
	require.NoError(t, err)
	require.Len(t, results, 20)
	popularCount := 0
	emergingCount := 0
	for _, track := range results {
		if track.PlayCount > 1000 {
			popularCount++
		} else {
			emergingCount++
		}
	}
	// Emerging artists (80% of the catalog) must account for
	// at least 50% of discovery results
	emergingRatio := float64(emergingCount) / float64(len(results))
	assert.GreaterOrEqual(t, emergingRatio, 0.5,
		"Discovery must not under-represent emerging artists: got %.0f%% emerging", emergingRatio*100)
	// Popular artists must not exceed their share of the catalog
	popularRatio := float64(popularCount) / float64(len(results))
	assert.LessOrEqual(t, popularRatio, 0.5,
		"Discovery must not over-represent popular artists: got %.0f%% popular", popularRatio*100)
}

func TestDiscovery_GenreDiversity(t *testing.T) {
	db := setupTestDB(t)
	genres := []string{"electronic", "rock", "jazz", "hip-hop", "classical"}
	for _, genre := range genres {
		for i := 0; i < 10; i++ {
			seedTrackWithGenre(t, db, genre)
		}
	}
	service := NewDiscoveryService(db)
	results, err := service.Discover(context.Background(), DiscoveryParams{Limit: 20})
	require.NoError(t, err)
	genreSet := map[string]bool{}
	for _, track := range results {
		genreSet[track.Genre] = true
	}
	assert.GreaterOrEqual(t, len(genreSet), 3,
		"Discovery results must include at least 3 different genres, got %d", len(genreSet))
}
```
### 14.3 Non-Regression Tests
```go
func TestDiscovery_NoPlayCountBias(t *testing.T) {
	db := setupTestDB(t)
	// Two identical tracks except for play count
	track1 := seedTrack(t, db, "Hidden Gem", 5)
	track2 := seedTrack(t, db, "Viral Hit", 1_000_000)
	service := NewDiscoveryService(db)
	scores := map[string]int{track1.ID: 0, track2.ID: 0}
	// Run discovery 100 times, count appearances
	for i := 0; i < 100; i++ {
		results, _ := service.Discover(context.Background(), DiscoveryParams{Limit: 10})
		for _, r := range results {
			if _, ok := scores[r.ID]; ok {
				scores[r.ID]++
			}
		}
	}
	ratio := float64(scores[track2.ID]) / float64(scores[track1.ID]+1)
	assert.Less(t, ratio, 3.0,
		"Viral track must not appear >3x more than hidden gem in discovery")
}
```
### 14.4 CI Execution
The discovery algorithm tests are part of the `integration-tests` job and run on every PR and in the nightly build. A failure is blocking.
## 15. ETHICAL TESTING STRATEGY
### 15.1 Principles
Veza rejects AI/ML, Web3/NFT, addictive gamification, and advertising trackers. The ethical tests make these commitments verifiable and automated.
### 15.2 Algorithmic Bias Tests
Every algorithm that surfaces content (discovery, search, suggestions) must be tested against the following biases:
| Bias | Criterion | Threshold |
|------|-----------|-----------|
| **Popularity** | Emerging artists (<100 plays) must be represented proportionally | ≥ 50% of results |
| **Recency** | New artists (<30 days) must appear | ≥ 20% of results |
| **Genre** | At least 3 genres on every discovery page | Minimum 3 genres |
| **Geography** | No country bias (where data is available) | No country > 40% |
These tests run nightly and on every PR that modifies the discovery or search services.
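The thresholds in the table above can be expressed as one reusable assertion helper shared by all bias tests. A sketch under stated assumptions: the `Track` struct and the `checkBiasThresholds` helper are illustrative stand-ins, not the actual service types:

```go
package main

import "fmt"

// Track is a minimal stand-in for one discovery result (illustrative only).
type Track struct {
	PlayCount int
	AgeDays   int
	Genre     string
	Country   string
}

// checkBiasThresholds applies the distribution thresholds from the bias table
// to one page of discovery results and returns every violated criterion.
func checkBiasThresholds(results []Track) []string {
	var violations []string
	n := float64(len(results))
	if n == 0 {
		return []string{"empty result set"}
	}
	emerging, recent := 0.0, 0.0
	genres := map[string]bool{}
	countries := map[string]float64{}
	for _, tr := range results {
		if tr.PlayCount < 100 {
			emerging++
		}
		if tr.AgeDays < 30 {
			recent++
		}
		genres[tr.Genre] = true
		countries[tr.Country]++
	}
	if emerging/n < 0.5 {
		violations = append(violations, "popularity: emerging artists < 50%")
	}
	if recent/n < 0.2 {
		violations = append(violations, "recency: new artists < 20%")
	}
	if len(genres) < 3 {
		violations = append(violations, "genre: fewer than 3 genres")
	}
	for c, count := range countries {
		if count/n > 0.4 {
			violations = append(violations, fmt.Sprintf("geography: %s > 40%%", c))
		}
	}
	return violations
}

func main() {
	page := []Track{
		{PlayCount: 10, AgeDays: 5, Genre: "jazz", Country: "FR"},
		{PlayCount: 50, AgeDays: 10, Genre: "rock", Country: "DE"},
		{PlayCount: 20000, AgeDays: 400, Genre: "electronic", Country: "US"},
		{PlayCount: 30, AgeDays: 700, Genre: "hip-hop", Country: "BR"},
	}
	fmt.Println(checkBiasThresholds(page)) // prints [] (no violations)
}
```

Centralizing the thresholds in one helper keeps the table and the tests from drifting apart: changing a threshold means changing exactly one function.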
### 15.3 Automated Accessibility Tests (axe-core)
Accessibility is tested automatically in the CI pipeline via `@axe-core/playwright`:
```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';
const CRITICAL_PAGES = ['/', '/discover', '/login', '/register', '/upload', '/settings'];
for (const path of CRITICAL_PAGES) {
  test(`accessibility audit: ${path}`, async ({ page }) => {
    await page.goto(path);
    const results = await new AxeBuilder({ page })
      .withTags(['wcag2a', 'wcag2aa', 'wcag21aa'])
      .analyze();
    expect(results.violations).toEqual([]);
  });
}

test('accessibility: audio player controls', async ({ page }) => {
  await page.goto('/tracks/test-track');
  const results = await new AxeBuilder({ page })
    .include('.audio-player')
    .withTags(['wcag2a', 'wcag2aa'])
    .analyze();
  expect(results.violations).toEqual([]);
  // Keyboard navigation verification
  await page.keyboard.press('Tab');
  const playButton = page.locator('.audio-player button[aria-label="Play"]');
  await expect(playButton).toBeFocused();
});
```
**CI Integration**:
```yaml
# Excerpt from the test.yml workflow
- name: Run accessibility tests (axe-core)
  run: |
    cd apps/web
    npx playwright test tests/accessibility/ --project=chromium
```
**Blocking criterion**: zero WCAG 2.1 AA violations on the critical pages. Any violation fails the CI build.
### 15.4 GDPR Compliance Tests
```typescript
import { test, expect } from '@playwright/test';
test.describe('GDPR Compliance', () => {
  test('user can export all personal data', async ({ request }) => {
    const token = await getAuthToken(request, 'gdpr-test-user@example.com');
    const exportResponse = await request.post('/api/v1/me/data-export', {
      headers: { Authorization: `Bearer ${token}` },
    });
    expect(exportResponse.status()).toBe(200);
    const data = await exportResponse.json();
    expect(data).toHaveProperty('user');
    expect(data).toHaveProperty('tracks');
    expect(data).toHaveProperty('playlists');
    expect(data).toHaveProperty('listening_history');
    expect(data.user.email).toBe('gdpr-test-user@example.com');
  });

  test('user can delete account and all associated data', async ({ request }) => {
    const token = await getAuthToken(request, 'delete-test-user@example.com');
    // Request deletion
    const deleteResponse = await request.delete('/api/v1/me', {
      headers: { Authorization: `Bearer ${token}` },
    });
    expect(deleteResponse.status()).toBe(200);
    // Verify account is gone
    const profileResponse = await request.get('/api/v1/me', {
      headers: { Authorization: `Bearer ${token}` },
    });
    expect(profileResponse.status()).toBe(401);
    // Verify associated data is deleted (admin endpoint for test)
    const adminToken = await getAdminToken(request);
    const dataCheck = await request.get('/api/v1/admin/user-data-check/delete-test-user@example.com', {
      headers: { Authorization: `Bearer ${adminToken}` },
    });
    const remaining = await dataCheck.json();
    expect(remaining.tracks).toBe(0);
    expect(remaining.playlists).toBe(0);
    expect(remaining.messages).toBe(0);
  });

  test("exported data does not contain other users' data", async ({ request }) => {
    const token = await getAuthToken(request, 'gdpr-test-user@example.com');
    const exportResponse = await request.post('/api/v1/me/data-export', {
      headers: { Authorization: `Bearer ${token}` },
    });
    const data = await exportResponse.json();
    const serialized = JSON.stringify(data);
    expect(serialized).not.toContain('other-user@example.com');
    expect(serialized).not.toContain('admin@example.com');
  });
});
```
**Execution**: the GDPR tests run in the E2E job, on every PR and in the nightly build. A failure blocks both merge and deployment.
### 15.5 Anti-Tracking Tests
```go
func TestAPI_NoTrackingHeaders(t *testing.T) {
	router := setupRouter(setupTestDB(t))
	endpoints := []string{"/v1/tracks", "/v1/discover", "/v1/users/profile"}
	for _, endpoint := range endpoints {
		t.Run(endpoint, func(t *testing.T) {
			req := httptest.NewRequest("GET", endpoint, nil)
			w := httptest.NewRecorder()
			router.ServeHTTP(w, req)
			// No tracking/fingerprinting headers
			assert.Empty(t, w.Header().Get("X-Request-Fingerprint"))
			assert.Empty(t, w.Header().Get("X-Device-ID"))
			// No third-party tracking cookies
			for _, cookie := range w.Result().Cookies() {
				assert.NotContains(t, cookie.Name, "_ga")
				assert.NotContains(t, cookie.Name, "_fbp")
				assert.NotContains(t, cookie.Name, "tracker")
			}
		})
	}
}
```
## ✅ VALIDATION CHECKLIST
### Test Coverage
- [ ] Unit tests ≥ 80% line coverage
- [ ] Integration tests cover all API endpoints
- [ ] E2E tests cover all critical user flows
- [ ] Performance tests for all API endpoints
- [ ] Security tests (SAST + DAST)
### Test Quality
- [ ] Tests are fast (unit < 2min, integration < 5min, E2E < 10min)
- [ ] Tests are deterministic (no flaky tests)
- [ ] Tests are isolated (no shared state)
- [ ] Tests are readable (clear AAA structure)
- [ ] Tests are maintainable (DRY, good naming)
### CI/CD Integration
- [ ] Tests run on every commit
- [ ] Coverage reported to Codecov
- [ ] Quality gates enforced
- [ ] Test results visible in PR
- [ ] Failed tests block merge
### Documentation
- [ ] Testing strategy documented
- [ ] Test fixtures documented
- [ ] CI/CD pipeline documented
- [ ] Quality gates documented
## 📊 SUCCESS METRICS
### Coverage Metrics
- **Unit Test Coverage**: ≥ 80%
- **Integration Test Coverage**: ≥ 70%
- **E2E Test Coverage**: ≥ 50% (critical flows: 100%)
### Performance Metrics
- **Unit Tests**: < 2 minutes
- **Integration Tests**: < 5 minutes
- **E2E Tests**: < 10 minutes (smoke), < 30 minutes (full suite)
- **Load Tests**: < 15 minutes
### Quality Metrics
- **Test Flakiness**: < 1%
- **Bug Escape Rate**: < 0.1% (bugs found in production)
- **Test Maintenance Time**: < 10% of development time
### CI/CD Metrics
- **Build Success Rate**: > 95%
- **Test Execution Time**: < 10 minutes (average)
- **Deployment Frequency**: Multiple per day
- **Mean Time to Recovery (MTTR)**: < 1 hour
## 🔄 VERSION HISTORY
| Version | Date | Changes |
|---------|------|---------|
| 1.0.0 | 2025-11-02 | Initial version - complete testing strategy |
| 2.0.0 | 2026-03-04 | Security audit: added post-deployment tests (smoke tests), discovery algorithm tests, ethical testing strategy (algorithmic bias, axe-core accessibility, GDPR compliance, anti-tracking). Extended Rust integration tests with DB. |
---
## ⚠️ WARNING
**THIS STRATEGY IS IMMUTABLE**
The testing strategy defined here is **LOCKED**. Any modification requires:
1. **Testing Review Board** (QA Lead, Tech Leads, CTO)
2. **Impact Analysis** (CI/CD, coverage, quality gates)
3. **Team Consensus** (75% majority vote)
4. **Migration Plan** (update tests, CI/CD, docs)
**Only authorized exceptions**:
- New testing methodologies (improvements)
- More effective testing tools
- New compliance requirements
**Forbidden modifications**:
- Lowering the minimum coverage (< 80%)
- Removing quality gates
- Disabling tests in CI/CD
- Tolerating flaky tests
**"Tests are the safety net that allows rapid development."**
---
**Document created by**: QA Team + Tech Leads
**Created**: 2025-11-02
**Last revised**: 2026-03-04 (security audit)
**Next review**: Quarterly (2026-06-01)
**Owner**: QA Lead
**Status**: **APPROVED AND LOCKED**