ORIGIN_QUALITY_METRICS.md
📋 EXECUTIVE SUMMARY
This document defines all quality metrics for the Veza platform. It covers code quality, performance, security, UX, and business indicators, together with their current thresholds, per-phase target trajectories, CI/CD quality gates, and the technical debt reduction plan (DT-001 to DT-017). These metrics ensure that a high, measurable quality standard is maintained.
🎯 OBJECTIVES
Primary Objective
Establish a complete, measurable, and actionable quality metrics system to guarantee technical excellence, user satisfaction, and business success over a 24-month horizon.
Secondary Objectives
- Detect regressions quickly (< 1 day)
- Keep technical debt under control (< 5%)
- Guarantee user satisfaction (NPS > 50)
- Optimize performance (< 100ms p95)
- Secure the platform (0 critical vulnerabilities)
📖 TABLE OF CONTENTS
- Code Quality Metrics
- Performance Metrics
- Security Metrics
- UX Metrics
- Business Metrics
- Technical Debt Tracking
- Quality Gates
- Review Criteria
- Success Criteria
🔒 IMMUTABLE RULES
- Metrics MUST be measurable (automatically, where possible)
- Thresholds MUST be defined (red/yellow/green)
- Dashboards are MANDATORY (Grafana, Datadog, etc.)
- Alerting is MANDATORY (PagerDuty, Slack)
- Weekly review (engineering team)
- Monthly report (stakeholders)
- Quality gates are BLOCKING (CI/CD pipeline)
- Regression tests (automatic after every deployment)
- Metrics retention (1 year minimum)
- Continuous improvement (quarterly objectives)
1. CODE QUALITY METRICS
1.1 Test Coverage
Metric: Percentage of code covered by tests
Current state and target trajectory:
| Stack | Current (March 2026) | Phase 3.5 | Phase 4R | Phase 5R |
|---|---|---|---|---|
| Frontend (TypeScript/React) | ~50% | 70% | 75% | 80% |
| Backend Go | Not measured | 70% | 80% | 80% |
| Backend Rust | Not measured | 60% | 70% | 75% |
Alert thresholds:
🔴 Red: < 60% (blocking in CI from Phase 3.5 onward)
🟡 Yellow: 60-69% (needs improvement)
🟢 Green: ≥ 70% (acceptable)
🌟 Goal: ≥ 80% (Phase 5R target)
Measurement:
# Go
go test ./... -coverprofile=coverage.out
go tool cover -func=coverage.out
# Rust
cargo tarpaulin --out Html --output-dir coverage/
# TypeScript/React
npm run test:coverage
Reporting:
Tool: Codecov or Coveralls
Frequency: Every commit (CI/CD)
Dashboard: Grafana panel "Test Coverage Trend"
Alert: Coverage drops > 5% → Slack notification
CI Gate: Coverage below the phase threshold → PR blocked (from Phase 3.5 onward)
Action if Below Target:
1. Identify untested code (coverage report)
2. Write missing tests (priority: critical paths)
3. Code review: Reject PRs that decrease coverage
4. Goal: Return to target within 2 sprints
1.2 Cyclomatic Complexity
Metric: Measure of code complexity (number of decision paths)
Targets:
🟢 Green: 1-10 (simple, easy to test)
🟡 Yellow: 11-20 (moderate, needs attention)
🔴 Red: > 20 (complex, refactor recommended)
Measurement:
# Go
gocyclo -over 10 .
# Rust
cargo-complexity
# TypeScript
npx complexity-report src/
Reporting:
Tool: SonarQube
Frequency: Daily (scheduled scan)
Dashboard: "Complexity Hot Spots"
Alert: New functions > 20 → Code review required
Action if Above Target:
1. Refactor function (extract methods)
2. Simplify logic (reduce nested conditions)
3. Split into smaller functions
4. Add unit tests for each path
1.3 Code Duplication
Metric: Percentage of duplicated code
Targets:
🟢 Green: < 3% (acceptable)
🟡 Yellow: 3-5% (needs attention)
🔴 Red: > 5% (unacceptable)
Measurement:
# SonarQube (all languages)
sonar-scanner -Dsonar.projectKey=veza
# jscpd (JavaScript/TypeScript)
npx jscpd src/
Action if Above Target:
1. Identify duplicated blocks (SonarQube report)
2. Extract to shared functions/components
3. Create utility modules
4. Document patterns for reuse
1.4 Code Churn
Metric: Number of times files are modified
Targets:
🟢 Green: < 5 changes/week/file
🟡 Yellow: 5-10 changes/week/file
🔴 Red: > 10 changes/week/file (unstable code)
Measurement:
# Git analysis
git log --pretty=format: --name-only --since="1 week ago" | \
sort | uniq -c | sort -rn | head -20
Reporting:
Tool: Code Climate
Dashboard: "Churn vs Complexity" matrix
Alert: High churn + high complexity → Refactor candidate
Action if Above Target:
1. Investigate reason for high churn
2. Stabilize APIs/interfaces
3. Improve test coverage (prevent regressions)
4. Refactor if needed
1.5 Static Analysis Issues
Metric: Number of linter warnings/errors
Targets:
🟢 Green: 0 errors, 0-5 warnings
🟡 Yellow: 0 errors, 6-10 warnings
🔴 Red: Any errors OR > 10 warnings
Measurement:
# Go
golangci-lint run
# Rust
cargo clippy -- -D warnings
# TypeScript
npm run lint
Reporting:
Tool: GitHub Actions (CI/CD)
Frequency: Every commit
Dashboard: "Lint Issues Trend"
Alert: Any errors → PR blocked
Action if Above Target:
1. Fix all errors (highest priority)
2. Fix warnings (by priority level)
3. Configure linter rules (if too strict)
4. Auto-fix with formatters where possible
2. PERFORMANCE METRICS
2.1 API Response Time
Metric: Time to respond to API requests
Targets:
| | p50 | p95 | p99 |
|---|---|---|---|
| 🟢 Green | < 50ms | < 100ms | < 500ms |
| 🟡 Yellow | 50-100ms | 100-200ms | 500ms-1s |
| 🔴 Red | > 100ms | > 200ms | > 1s |
Measurement:
Tool: Prometheus + Grafana
Query:
histogram_quantile(0.95,
rate(http_request_duration_seconds_bucket[5m])
)
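For ad-hoc checks or CI scripts, the same query can be run against the Prometheus HTTP API. A minimal TypeScript sketch (the Prometheus address is an assumption to adapt to the actual deployment):
const PROM_URL = 'http://prometheus:9090'; // hypothetical in-cluster address
const query = 'histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m]))';
async function fetchApiP95(): Promise<number> {
  const res = await fetch(`${PROM_URL}/api/v1/query?query=${encodeURIComponent(query)}`);
  const body = await res.json();
  // Instant query result: data.result[0].value = [timestamp, "value as string"]
  return parseFloat(body.data.result[0]?.value?.[1] ?? 'NaN');
}
fetchApiP95().then((p95) => console.log(`API p95: ${(p95 * 1000).toFixed(1)} ms`));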
Reporting:
Dashboard: "API Performance"
Alert: p95 > 200ms for 5 minutes → PagerDuty alert
Action if Above Target:
1. Identify slow endpoints (APM traces)
2. Optimize database queries (add indexes, reduce N+1)
3. Add caching (Redis)
4. Scale horizontally (more instances)
5. Profile code (pprof for Go, flamegraph for Rust)
2.2 Database Query Performance
Metric: Query execution time
Targets:
| | p50 | p95 | p99 |
|---|---|---|---|
| 🟢 Green | < 10ms | < 50ms | < 100ms |
| 🟡 Yellow | 10-20ms | 50-100ms | 100-200ms |
| 🔴 Red | > 20ms | > 100ms | > 200ms |
Measurement:
-- PostgreSQL
SELECT
query,
mean_exec_time,
calls
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 20;
Reporting:
Tool: pg_stat_statements + Datadog
Dashboard: "Slow Queries"
Alert: Query > 100ms executed > 100 times/min → Slack
Action if Above Target:
1. Analyze query plan (EXPLAIN ANALYZE)
2. Add missing indexes
3. Rewrite query (optimize joins)
4. Consider denormalization
5. Add materialized view
2.3 Frontend Performance (Lighthouse Score)
Metric: Google Lighthouse score (0-100)
Targets:
| | Performance | Accessibility | Best Practices | SEO |
|---|---|---|---|---|
| 🟢 Green | ≥ 90 | ≥ 95 | ≥ 90 | ≥ 90 |
| 🟡 Yellow | 80-89 | 90-94 | 80-89 | 80-89 |
| 🔴 Red | < 80 | < 90 | < 80 | < 80 |
Measurement:
# Lighthouse CI
npm install -g @lhci/cli
lhci autorun --collect.url=https://veza.io
Reporting:
Tool: Lighthouse CI + GitHub Actions
Frequency: Every deployment
Dashboard: "Lighthouse Scores Trend"
Alert: Score drops > 10 points → Deployment review
Action if Below Target:
Performance:
- Optimize images (WebP, lazy loading)
- Code splitting (dynamic imports)
- Reduce bundle size (tree shaking)
- Enable CDN caching
- Optimize fonts (preload, subset)
Accessibility:
- Fix color contrast issues
- Add ARIA labels
- Ensure keyboard navigation
- Add alt text to images
Best Practices:
- Use HTTPS
- Remove console.log
- Fix CSP violations
- Update dependencies
SEO:
- Add meta tags
- Improve page titles
- Add structured data
- Fix broken links
2.4 Time to First Byte (TTFB)
Metric: Server response time
Targets:
🟢 Green: < 200ms
🟡 Yellow: 200-500ms
🔴 Red: > 500ms
Measurement:
Tool: Cloudflare Analytics, Datadog RUM
Metric: navigation.timing.responseStart - navigation.timing.requestStart
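A minimal browser-side sketch of the same computation using the Navigation Timing API (the /metrics/rum endpoint and the sendMetric helper are assumptions):
function sendMetric(name: string, value: number): void {
  // Hypothetical RUM endpoint; sendBeacon survives page unloads
  navigator.sendBeacon('/metrics/rum', JSON.stringify({ name, value }));
}
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
if (nav) {
  sendMetric('ttfb_ms', nav.responseStart - nav.requestStart);
}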
Action if Above Target:
1. Enable server-side caching
2. Optimize database queries
3. Use CDN (Cloudflare)
4. Enable HTTP/2 or HTTP/3
5. Reduce server processing time
2.5 Streaming Performance
Metric: Audio buffering and startup time
Targets:
Initial buffering: < 1 second
Rebuffering rate: < 0.5%
Audio start time: < 2 seconds
Measurement:
// Client-side measurement
// audio: the HTMLAudioElement being loaded; sendMetric: custom helper that ships the value to the metrics backend
const startTime = performance.now();
audio.addEventListener('canplaythrough', () => {
  const loadTime = performance.now() - startTime;
  sendMetric('audio_load_time', loadTime);
});
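Rebuffering can be tracked the same way, counting 'waiting' events once playback has started; a sketch reusing the sendMetric helper above (the rebuffering rate itself is then computed server-side against minutes played):
const audio = document.querySelector('audio')!; // same element as in the snippet above
let playbackStarted = false;
let rebufferCount = 0;
audio.addEventListener('playing', () => {
  playbackStarted = true;
});
audio.addEventListener('waiting', () => {
  // 'waiting' after playback has started ≈ one rebuffering event
  if (playbackStarted) {
    rebufferCount += 1;
    sendMetric('audio_rebuffer_count', rebufferCount);
  }
});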
Reporting:
Tool: Custom metrics → Prometheus
Dashboard: "Streaming Performance"
Alert: Rebuffering rate > 1% → Engineering review
Action if Above Target:
1. Optimize audio encoding (adaptive bitrate)
2. Use CDN for audio files
3. Implement prefetching
4. Reduce initial chunk size
5. Monitor CDN performance
3. SECURITY METRICS
3.1 Vulnerability Count
Metric: Number of known vulnerabilities
Targets:
🟢 Green: 0 critical, 0 high
🟡 Yellow: 0 critical, 1-3 high
🔴 Red: Any critical OR > 3 high
Measurement:
# Dependency scanning
npm audit
cargo audit
go list -json -m all | nancy sleuth
Reporting:
Tool: Snyk, Dependabot, Trivy
Frequency: Daily (automated scan)
Dashboard: "Vulnerability Dashboard"
Alert: Critical vulnerability → Immediate fix (< 24h)
Action if Above Target:
1. Review vulnerability details
2. Update dependency (if patch available)
3. Apply workaround (if no patch yet)
4. Remove dependency (if possible)
5. Investigate affected code paths
3.2 Failed Login Attempts
Metric: Number of failed login attempts
Targets:
🟢 Green: < 1% of total logins
🟡 Yellow: 1-5% of total logins
🔴 Red: > 5% of total logins (potential attack)
Measurement:
SELECT
DATE(attempted_at) AS date,
COUNT(*) FILTER (WHERE success = false) AS failed,
COUNT(*) AS total,
ROUND(100.0 * COUNT(*) FILTER (WHERE success = false) / COUNT(*), 2) AS failure_rate
FROM login_attempts
WHERE attempted_at > NOW() - INTERVAL '7 days'
GROUP BY DATE(attempted_at);
Reporting:
Tool: Prometheus + Grafana
Dashboard: "Login Metrics"
Alert: Failure rate > 10% for 5 minutes → Security review
Action if Above Target:
1. Check for credential stuffing attack
2. Implement CAPTCHA (if not already)
3. Rate limit login endpoint (see the sketch after this list)
4. Block suspicious IPs (WAF rules)
5. Notify users of suspicious activity
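A minimal sketch for item 3, assuming a per-IP fixed window; the login services are Go/Rust, so this TypeScript version is purely illustrative, and the 5-attempts / 15-minute values are assumptions (a production setup would back this with Redis and WAF rules):
const WINDOW_MS = 15 * 60 * 1000; // illustrative 15-minute window
const MAX_ATTEMPTS = 5;           // illustrative attempt budget per window
const attempts = new Map<string, { count: number; windowStart: number }>();
export function isLoginAllowed(ip: string, now = Date.now()): boolean {
  const entry = attempts.get(ip);
  if (!entry || now - entry.windowStart >= WINDOW_MS) {
    attempts.set(ip, { count: 1, windowStart: now });
    return true;
  }
  entry.count += 1;
  return entry.count <= MAX_ATTEMPTS;
}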
3.3 Mean Time to Remediate (MTTR) Vulnerabilities
Metric: Time from vulnerability discovery to fix deployed
Targets:
🟢 Green: < 7 days (critical), < 30 days (high)
🟡 Yellow: 7-14 days (critical), 30-60 days (high)
🔴 Red: > 14 days (critical), > 60 days (high)
Measurement:
Formula:
MTTR = Σ(date fixed − date discovered) / number of vulnerabilities remediated
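A sketch of the same computation over a list of remediated vulnerabilities (field names are illustrative):
interface RemediatedVulnerability {
  discoveredAt: Date;
  fixedAt: Date;
}
function meanTimeToRemediateDays(vulns: RemediatedVulnerability[]): number {
  if (vulns.length === 0) return 0;
  const totalMs = vulns.reduce(
    (sum, v) => sum + (v.fixedAt.getTime() - v.discoveredAt.getTime()),
    0,
  );
  return totalMs / vulns.length / (1000 * 60 * 60 * 24); // mean, in days
}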
Reporting:
Tool: Jira, GitHub Security Advisories
Dashboard: "Security MTTR"
Review: Monthly security meeting
Action if Above Target:
1. Prioritize security fixes
2. Allocate dedicated time (security sprint)
3. Improve scanning frequency
4. Automate dependency updates (Dependabot)
3.4 Security Incidents
Metric: Number of security incidents
Targets:
🟢 Green: 0 incidents/quarter
🟡 Yellow: 1-2 incidents/quarter (minor)
🔴 Red: Any data breach OR > 2 incidents
Measurement:
Manual tracking: Incident log (spreadsheet or Jira)
Categories: Data breach, DDoS, unauthorized access, etc.
Reporting:
Review: Post-incident review (within 7 days)
Dashboard: "Security Incidents"
Report: Quarterly security report to leadership
Action if Incident Occurs:
1. Incident response plan (containment, eradication)
2. Root cause analysis (RCA)
3. Implement preventive measures
4. Update security documentation
5. Conduct security training
4. UX METRICS
4.1 Core Web Vitals
Largest Contentful Paint (LCP):
🟢 Green: < 2.5s
🟡 Yellow: 2.5-4s
🔴 Red: > 4s
First Input Delay (FID):
🟢 Green: < 100ms
🟡 Yellow: 100-300ms
🔴 Red: > 300ms
Cumulative Layout Shift (CLS):
🟢 Green: < 0.1
🟡 Yellow: 0.1-0.25
🔴 Red: > 0.25
Measurement:
// Real User Monitoring (RUM)
import { getCLS, getFID, getLCP } from 'web-vitals';
getCLS(console.log);
getFID(console.log);
getLCP(console.log);
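Logging to the console is only a starting point; the usual pattern is to ship each metric to an analytics endpoint. A sketch assuming a /analytics endpoint (in web-vitals v3+ the same functions are named onCLS/onFID/onLCP):
import { getCLS, getFID, getLCP } from 'web-vitals';
function sendToAnalytics(metric: { name: string; value: number; id: string }): void {
  const body = JSON.stringify({ name: metric.name, value: metric.value, id: metric.id });
  // sendBeacon survives page unload; fall back to fetch with keepalive
  if (!navigator.sendBeacon('/analytics', body)) {
    fetch('/analytics', { method: 'POST', body, keepalive: true });
  }
}
getCLS(sendToAnalytics);
getFID(sendToAnalytics);
getLCP(sendToAnalytics);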
Reporting:
Tool: Google Analytics, Datadog RUM
Dashboard: "Core Web Vitals"
Alert: Any metric in red zone → Frontend optimization sprint
Action if Below Target:
LCP:
- Optimize server response time
- Preload critical resources
- Optimize images
- Remove render-blocking JS/CSS
FID:
- Break up long tasks
- Use web workers
- Defer non-critical JS
- Code splitting
CLS:
- Add size attributes to images/videos
- Avoid inserting content above existing
- Use CSS transforms (not top/left)
4.2 Accessibility Score (WCAG)
Metric: WCAG 2.1 compliance level
Targets:
🟢 Green: AAA compliance (no violations)
🟡 Yellow: AA compliance (1-5 minor violations)
🔴 Red: < AA compliance (> 5 violations)
Measurement:
# Automated testing
npm install -g @axe-core/cli
axe https://veza.io --save violations.json
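In addition to the CLI, the same checks can run in CI with @axe-core/playwright; a minimal sketch (URL and tags to adapt):
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';
test('home page has no WCAG A/AA violations', async ({ page }) => {
  await page.goto('https://veza.io');
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa']) // limit the scan to WCAG 2.x A/AA rules
    .analyze();
  expect(results.violations).toEqual([]);
});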
Reporting:
Tool: axe DevTools, Lighthouse
Frequency: Every deployment
Dashboard: "Accessibility Violations"
Alert: New violations introduced → Block merge
Action if Below Target:
1. Fix critical violations (color contrast, missing labels)
2. Add ARIA attributes
3. Ensure keyboard navigation
4. Add screen reader support
5. Manual testing with assistive technology
4.3 User Satisfaction (NPS)
Metric: Net Promoter Score
Targets:
🟢 Green: NPS > 50 (excellent)
🟡 Yellow: NPS 30-50 (good)
🔴 Red: NPS < 30 (needs improvement)
Calculation:
NPS = % Promoters (9-10) - % Detractors (0-6)
Example:
50% Promoters, 10% Detractors
NPS = 50 - 10 = 40
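The same calculation from raw 0-10 survey scores, as a sketch:
function netPromoterScore(scores: number[]): number {
  if (scores.length === 0) return 0;
  const promoters = scores.filter((s) => s >= 9).length;  // 9-10
  const detractors = scores.filter((s) => s <= 6).length; // 0-6
  return Math.round(((promoters - detractors) / scores.length) * 100);
}
// Matches the example above: 50 promoters, 40 passives, 10 detractors out of 100 → 40
console.log(netPromoterScore([...Array(50).fill(10), ...Array(40).fill(8), ...Array(10).fill(3)]));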
Measurement:
Survey: In-app survey (quarterly)
Question: "How likely are you to recommend Veza to a friend?" (0-10)
Sample size: Min 100 responses
Reporting:
Dashboard: "User Satisfaction"
Review: Quarterly product review
Trend: Track NPS changes over time
Action if Below Target:
1. Analyze detractor feedback (qualitative data)
2. Identify top pain points
3. Prioritize fixes (product roadmap)
4. Follow up with detractors (close the loop)
5. Measure improvement (next survey)
4.4 Error Rate (User-Facing)
Metric: Percentage of user actions that result in errors
Targets:
🟢 Green: < 0.1%
🟡 Yellow: 0.1-1%
🔴 Red: > 1%
Measurement:
// Frontend error tracking
window.addEventListener('error', (event) => {
sendErrorToSentry(event.error);
});
// React error boundary (componentDidCatch method of a class component)
componentDidCatch(error, errorInfo) {
logErrorToService(error, errorInfo);
}
Reporting:
Tool: Sentry, Datadog
Dashboard: "Error Rate"
Alert: Error rate spikes > 5x baseline → Immediate investigation
Action if Above Target:
1. Identify most common errors (Sentry dashboard)
2. Reproduce errors (local testing)
3. Fix root cause
4. Add error handling (graceful degradation)
5. Add monitoring for specific errors
5. BUSINESS METRICS
5.1 Conversion Rate
Metric: Percentage of visitors who complete desired action
Targets:
Sign-up conversion: > 5%
Free → Paid conversion: > 3%
Cart → Purchase: > 10%
Measurement:
Formula:
Conversion Rate = (Conversions / Total Visitors) * 100
Example:
1,000 visitors, 50 sign-ups
Conversion = (50 / 1,000) * 100 = 5%
Reporting:
Tool: Google Analytics, Mixpanel
Dashboard: "Conversion Funnel"
Review: Weekly growth meeting
Action if Below Target:
1. A/B test landing page variants
2. Optimize sign-up flow (reduce friction)
3. Improve value proposition messaging
4. Add social proof (testimonials, reviews)
5. Optimize pricing page
5.2 Customer Lifetime Value (CLV)
Metric: Total revenue from average customer
Targets:
🟢 Green: CLV > $500
🟡 Yellow: CLV $200-$500
🔴 Red: CLV < $200
Calculation:
CLV = Average Purchase Value × Purchase Frequency × Customer Lifespan
Example:
Average purchase: $50
Purchases/year: 4
Lifespan: 3 years
CLV = $50 × 4 × 3 = $600
Reporting:
Dashboard: "Customer Metrics"
Review: Monthly business review
Cohort analysis: Track CLV by acquisition channel
Action if Below Target:
1. Increase purchase frequency (email campaigns)
2. Increase average order value (upselling, bundles)
3. Improve retention (loyalty program)
4. Reduce churn (better onboarding)
5.3 Churn Rate
Metric: Percentage of customers who cancel subscription
Targets:
🟢 Green: < 5%/month
🟡 Yellow: 5-10%/month
🔴 Red: > 10%/month
Calculation:
Monthly Churn Rate = (Customers Lost This Month / Customers at Start of Month) * 100
Example:
Start: 1,000 customers
Lost: 50 customers
Churn = (50 / 1,000) * 100 = 5%
Reporting:
Tool: ChartMogul, internal analytics
Dashboard: "Churn Analysis"
Review: Weekly retention meeting
Action if Above Target:
1. Analyze churn reasons (exit surveys)
2. Identify at-risk users (usage patterns)
3. Implement retention campaigns
4. Improve onboarding (reduce early churn)
5. Add features to increase stickiness
5.4 Monthly Active Users (MAU)
Metric: Number of unique users who engage with platform monthly
Targets:
🟢 Green: MAU growth > 10%/month
🟡 Yellow: MAU growth 0-10%/month
🔴 Red: MAU declining
Calculation:
MAU = COUNT(DISTINCT user_id)
WHERE last_activity_at >= NOW() - INTERVAL '30 days'
Reporting:
Dashboard: "User Engagement"
Metric: MAU, DAU, DAU/MAU ratio (stickiness)
Review: Weekly growth meeting
Action if Below Target:
1. Improve retention (email re-engagement)
2. Increase acquisition (marketing campaigns)
3. Add viral features (sharing, referrals)
4. Improve product value (new features)
5. Reduce friction (optimize UX)
6. TECHNICAL DEBT TRACKING
6.1 Technical Debt Ratio
Metric: Percentage of development time spent on technical debt
Targets:
🟢 Green: < 5%
🟡 Yellow: 5-10%
🔴 Red: > 10%
Measurement:
Formula:
TD Ratio = (Time Spent on TD / Total Development Time) * 100
Track in Jira:
- Label "technical-debt" on tickets
- Measure time spent (burndown)
Reporting:
Dashboard: "Technical Debt"
Review: Quarterly tech review
Visualization: Debt heat map (by module)
Action if Above Target:
1. Allocate dedicated sprints for debt reduction
2. Prioritize high-impact debt (ROI analysis)
3. Refactor instead of patching
4. Improve code reviews (prevent new debt)
5. Set debt reduction goals (quarterly)
6.2 TODO/FIXME Comments
Metric: Number of TODO/FIXME comments in code
Current state: 356 TODO/FIXME comments (March 2026 audit)
Target trajectory:
| Metric | Current | Phase 3.5 | Phase 4R | Phase 5R |
|---|---|---|---|---|
| TODO/FIXME | 356 | < 200 | < 50 | < 30 |
Alert thresholds:
🟢 Green: < 50 TODOs
🟡 Yellow: 50-100 TODOs
🔴 Red: > 100 TODOs (current state: 356 🔴)
Measurement:
# Count TODO/FIXME comments
rg -c "TODO|FIXME" -g "*.go" -g "*.rs" -g "*.ts" -g "*.tsx" | awk -F: '{s+=$2} END {print s}'
Reporting:
Tool: SonarQube + dedicated CI script
Dashboard: "Code Debt Indicators"
Review: Monthly tech debt review
CI: Trend tracked; alert if the count grows by more than 10 in a single week
Action plan:
1. Initial triage: convert each TODO/FIXME into a GitHub issue with the "tech-debt" label
2. Prioritize by criticality (security > correctness > clean-up)
3. Allocate 20% of sprint time to TODO reduction
4. Forbid new TODOs without an associated issue
5. Phase 4R target: < 50 (an 86% reduction)
6.3 Deprecated Code
Metric: Lines of code marked as deprecated
Targets:
🟢 Green: < 1% of codebase
🟡 Yellow: 1-3% of codebase
🔴 Red: > 3% of codebase
Measurement:
# Count @deprecated annotations
grep -r "@deprecated" --include="*.go" --include="*.rs" --include="*.ts" . | wc -l
Action if Above Target:
1. Plan migration (deprecation timeline)
2. Communicate to users (if public API)
3. Provide alternatives (migration guide)
4. Remove after grace period (6 months)
6.4 Technical Debt Reduction Plan (DT-001 to DT-017)
The following items were identified during the March 2026 technical audit. Each item has a unique identifier, a priority, and a target deadline.
| ID | Description | Priority | Target phase | Estimated effort |
|---|---|---|---|---|
| DT-001 | Frontend coverage: 50% → 70% (add tests for critical components) | 🔴 High | Phase 3.5 | 3 sprints |
| DT-002 | Go coverage: set up measurement + reach 70% | 🔴 High | Phase 3.5 | 2 sprints |
| DT-003 | Rust coverage: set up tarpaulin + reach 60% | 🟡 Medium | Phase 3.5 | 2 sprints |
| DT-004 | TODO/FIXME reduction: 356 → < 200 (triage + conversion into issues) | 🔴 High | Phase 3.5 | 2 sprints |
| DT-005 | TODO/FIXME reduction: < 200 → < 50 | 🟡 Medium | Phase 4R | 3 sprints |
| DT-006 | Files > 500 lines: 139 → < 100 (split the largest files) | 🟡 Medium | Phase 3.5 | 3 sprints |
| DT-007 | Files > 500 lines: < 100 → < 50 | 🟢 Low | Phase 5R | 4 sprints |
| DT-008 | Implement Lighthouse CI in GitHub Actions | 🔴 High | Phase 3.5 | 1 sprint |
| DT-009 | Implement nightly k6 load tests | 🔴 High | Phase 3.5 | 1 sprint |
| DT-010 | Enable pg_stat_statements in production | 🔴 High | Phase 3.5 | 0.5 sprint |
| DT-011 | Implement CI coverage enforcement (blocking) | 🔴 High | Phase 3.5 | 0.5 sprint |
| DT-012 | Remove dead code and unused dependencies | 🟡 Medium | Phase 4R | 2 sprints |
| DT-013 | Migrate arbitrary Tailwind values to design tokens | 🟡 Medium | Phase 4R | 2 sprints |
| DT-014 | Standardize error handling (Go + Rust) | 🟡 Medium | Phase 4R | 2 sprints |
| DT-015 | Document internal APIs (Go handlers, Rust services) | 🟢 Low | Phase 5R | 2 sprints |
| DT-016 | Refactor UI components > 300 lines | 🟡 Medium | Phase 4R | 3 sprints |
| DT-017 | Implement CI bundle size check (< 200KB gzip) | 🔴 High | Phase 3.5 | 0.5 sprint |
Management rules:
- Each DT-xxx item is tracked as a GitHub issue with the "tech-debt" label
- 20% of sprint time is allocated to technical debt reduction
- 🔴 High-priority items are addressed first
- The engineering team reviews progress monthly
- No new debt item may be introduced without an associated issue
6.5 Files > 500 Lines
Metric: Number of source files exceeding 500 lines
Current state: 139 files > 500 lines (March 2026 audit)
Target trajectory:
| Metric | Current | Phase 3.5 | Phase 4R | Phase 5R |
|---|---|---|---|---|
| Files > 500 lines | 139 | < 100 | < 75 | < 50 |
Alert thresholds:
🟢 Green: < 50 files
🟡 Yellow: 50-100 files
🔴 Red: > 100 files (current state: 139 🔴)
Measurement:
rg --files --type ts --type go --type rust | xargs wc -l | awk '$2 != "total" && $1 > 500 {count++} END {print count " files > 500 lines"}'
Action plan:
1. Identify the longest files (top 20)
2. Plan how to split them into sub-modules/components
3. Apply the cursorrules rule: > 300 lines = mandatory split
4. Each sprint: split at least 5 files
5. New files: hard limit of 300 lines enforced in code review
7. QUALITY GATES
7.1 CI/CD Quality Gates
Pre-Merge Checks (blocking):
✅ All tests pass (unit, integration)
✅ Test coverage ≥ current phase threshold (blocking from Phase 3.5 onward)
✅ No linter errors
✅ No security vulnerabilities (critical/high)
✅ Code review approved (1+ reviewer)
✅ No merge conflicts
✅ Lighthouse CI: no Performance or Accessibility regression (blocking)
Pre-Deployment Checks (blocking):
✅ All CI tests pass
✅ Smoke tests pass (staging)
✅ Performance tests pass (load, stress)
✅ Security scan pass (SAST, DAST)
✅ Database migrations successful
✅ Rollback plan documented
Post-Deployment Checks (monitoring):
✅ Error rate < baseline (+10%)
✅ Response time < baseline (+20%)
✅ CPU/Memory usage < 80%
✅ No critical alerts (30 min observation)
7.1bis Quality Gates to Implement
The following quality gates must be added to the CI/CD pipeline according to the schedule below:
| Quality Gate | Type | Status | Deadline |
|---|---|---|---|
| Coverage enforcement in CI | Blocking | To implement | Phase 3.5 |
| Lighthouse CI (Performance ≥ 90) | Blocking on regression | To implement | Phase 3.5 |
| Lighthouse CI (Accessibility ≥ 95) | Blocking on regression | To implement | Phase 3.5 |
| k6 performance baseline | Informational | To implement | Phase 3.5 |
| k6 performance baseline | Blocking | Migration | Phase 4R |
| Bundle size check (< 200KB gzip) | Blocking | To implement | Phase 3.5 |
| TODO/FIXME count trend | Informational | To implement | Phase 3.5 |
Implementation details:
Coverage enforcement in CI (blocking):
# .github/workflows/coverage.yml
- name: Check coverage threshold
run: |
THRESHOLD=70 # Phase 3.5; raised in each subsequent phase
COVERAGE=$(go tool cover -func=coverage.out | grep total | awk '{print $3}' | sed 's/%//')
if (( $(echo "$COVERAGE < $THRESHOLD" | bc -l) )); then
echo "Coverage $COVERAGE% below threshold $THRESHOLD%"
exit 1
fi
Lighthouse CI (blocking on regression):
# .github/workflows/lighthouse.yml
- name: Lighthouse CI
uses: treosh/lighthouse-ci-action@v10
with:
urls: |
https://staging.veza.app/
https://staging.veza.app/dashboard
budgetPath: ./lighthouse-budget.json
uploadArtifacts: true
# Blocks if Performance < 90 or Accessibility < 95
k6 performance baseline:
# Phase 3.5: informational (report only)
# Phase 4R: blocking (fail if p95 > 100ms)
- name: k6 load test
run: |
k6 run --out json=results.json tests/load/smoke.js
# Analyze results.json; fail if thresholds are exceeded (Phase 4R+)
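Bundle size check (< 200KB gzip, blocking):
No pipeline snippet exists for this gate yet; a minimal Node/TypeScript sketch that could run as a CI step (the dist/assets path and the budget are assumptions to align with the real build output):
import { readFileSync, readdirSync } from 'node:fs';
import { join } from 'node:path';
import { gzipSync } from 'node:zlib';
const DIST_DIR = 'dist/assets';   // hypothetical build output directory
const BUDGET_BYTES = 200 * 1024;  // 200 KB gzip budget (DT-017)
const totalGzipped = readdirSync(DIST_DIR)
  .filter((f) => f.endsWith('.js'))
  .reduce((sum, f) => sum + gzipSync(readFileSync(join(DIST_DIR, f))).length, 0);
console.log(`Gzipped JS bundle size: ${(totalGzipped / 1024).toFixed(1)} KB`);
if (totalGzipped > BUDGET_BYTES) {
  console.error(`Bundle exceeds the ${BUDGET_BYTES / 1024} KB gzip budget`);
  process.exit(1);
}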
7.2 Release Quality Gates
Criteria for Production Release:
✅ All acceptance tests pass
✅ Security audit complete (no critical issues)
✅ Performance benchmarks met
✅ Accessibility audit pass (WCAG AA)
✅ Documentation updated
✅ Release notes written
✅ Rollback tested (staging)
✅ On-call engineer assigned
✅ Stakeholder approval
7.3 Feature Flag Quality Gates
Criteria for Enabling Feature:
✅ Feature tested (QA sign-off)
✅ Monitoring dashboards created
✅ A/B test configured (if applicable)
✅ Gradual rollout plan defined
✅ Rollback procedure documented
✅ User documentation published
8. REVIEW CRITERIA
8.1 Code Review Checklist
Functionality:
- Code does what it's supposed to do
- Edge cases handled
- Error handling complete
Tests:
- Unit tests included
- Tests cover edge cases
- Tests are readable
Code Quality:
- Code follows style guide
- No code duplication
- Functions < 50 lines
- Complexity < 10
Security:
- Input validated
- SQL injection prevented
- XSS prevented
- Secrets not committed
Performance:
- No N+1 queries
- Efficient algorithms
- Caching considered
Documentation:
- Public functions documented
- README updated (if needed)
- Comments explain "why" (not "what")
8.2 Design Review Criteria
UI Design:
- Consistent with design system
- Responsive (mobile, tablet, desktop)
- Accessible (WCAG AA minimum)
- Follows brand guidelines
UX Design:
- User flow intuitive
- Feedback on user actions
- Error states handled
- Loading states shown
8.3 Architecture Review Criteria
Scalability:
- Horizontally scalable
- No single point of failure
- Handles expected load (+ 50%)
Security:
- Follows security best practices
- Data encrypted (at rest, in transit)
- Authentication/authorization correct
Maintainability:
- Clear separation of concerns
- Dependencies documented
- Deployment strategy defined
9. SUCCESS CRITERIA
9.1 Feature Success Criteria
Template:
Feature: [Feature Name]
Success Metrics:
- [ ] Adoption: > [X]% of users try feature within 30 days
- [ ] Engagement: > [Y]% of users who try feature use it weekly
- [ ] Satisfaction: NPS > [Z] (feature-specific survey)
- [ ] Performance: Feature response time < [A]ms (p95)
- [ ] Quality: < [B] critical bugs reported
Business Impact:
- [ ] Increase conversion by [X]%
- [ ] Reduce churn by [Y]%
- [ ] Increase revenue by $[Z]
Timeline:
- Launch: [Date]
- Review: [Date + 30 days]
- Decision: Keep / Iterate / Sunset
Example (Track Collaboration Feature):
Feature: Real-time Track Collaboration
Success Metrics:
- [ ] Adoption: > 10% of creators try collaboration within 30 days
- [ ] Engagement: > 50% of users who try it use it weekly
- [ ] Satisfaction: NPS > 60
- [ ] Performance: < 100ms latency (p95)
- [ ] Quality: < 5 critical bugs
Business Impact:
- [ ] Increase premium subscriptions by 15%
- [ ] Reduce creator churn by 20%
Timeline:
- Launch: 2025-12-01
- Review: 2026-01-01
- Decision: Based on metrics
9.2 Sprint Success Criteria
Definition of Done (Sprint):
✅ All planned stories completed
✅ No critical bugs in production
✅ Test coverage maintained (≥ 80%)
✅ All code reviewed and merged
✅ Documentation updated
✅ Demo prepared (for stakeholders)
✅ Retrospective completed
9.3 Project Success Criteria
Large Project (Multi-Sprint):
✅ All epics completed
✅ Acceptance criteria met
✅ User documentation published
✅ Training materials created (if needed)
✅ Performance benchmarks met
✅ Security audit passed
✅ Stakeholder sign-off
✅ Post-launch monitoring (30 days)
✅ Lessons learned documented
✅ VALIDATION CHECKLIST
Metrics Defined
- All metrics have clear definitions
- All metrics have measurable targets (red/yellow/green)
- All metrics have measurement methods
- All metrics have reporting dashboards
- All metrics have alerting configured
Quality Gates
- CI/CD gates configured
- Pre-merge checks automated
- Pre-deployment checks automated
- Post-deployment monitoring active
- Coverage enforcement in CI (blocking, Phase 3.5)
- Lighthouse CI integrated (blocking on regression, Phase 3.5)
- k6 performance baseline (informational in Phase 3.5, blocking in Phase 4R)
- Bundle size check (< 200KB gzip, Phase 3.5)
- pg_stat_statements enabled in production
Review Process
- Code review checklist documented
- Design review criteria defined
- Architecture review criteria defined
Success Criteria
- Feature success criteria template created
- Sprint DoD defined
- Project success criteria defined
📊 SUCCESS METRICS (Meta-Metrics)
Metrics Compliance
- Metrics Coverage: 100% of critical areas have defined metrics
- Metrics Freshness: Dashboards updated real-time (< 5 min lag)
- Alert Response Time: < 15 minutes for critical alerts
- Quality Gate Compliance: 100% of releases pass quality gates
🔄 VERSION HISTORY
| Version | Date | Changes |
|---|---|---|
| 1.0.0 | 2025-11-02 | Initial version - complete quality metrics |
| 2.0.0 | 2026-03-04 | Audit: added current state vs. per-phase targets (frontend coverage 50%→80%, Go/Rust not measured→70-80%), TODO/FIXME 356→<50, files >500 lines 139→<50, CI quality gates to implement (blocking coverage, Lighthouse CI, k6, bundle size), technical debt plan DT-001 to DT-017 |
⚠️ WARNING
THESE METRICS ARE IMMUTABLE
The quality metrics defined here are LOCKED. Any change requires:
- A Quality Metrics Change RFC with justification
- VP Engineering approval
- Dashboard and alerting updates
- Communication to the team
Quality gates are BLOCKING and cannot be bypassed.
Document created by: Engineering Leadership + DevOps
Creation date: 2025-11-02
Last revision: 2026-03-04 (v2.0.0)
Next revision: Quarterly (2026-06-01)
Owner: VP Engineering
Status: ✅ APPROVED AND LOCKED