Compare commits

8 commits: 2268b06fc9...7385f1e4ed

| SHA1 |
|---|
| 7385f1e4ed |
| b875efcffc |
| 5530267287 |
| 3a95e38fdf |
| f80d46a153 |
| 4538c6b281 |
| 3ee16bbe23 |
| b7ac65b73d |
38 changed files with 1320 additions and 87 deletions
CHANGELOG.md (176 lines changed)
@@ -1,5 +1,181 @@

# Changelog - Veza

## [v1.0.5] - 2026-04-16

### Hardening sprint — seven critical-path fixes before public opening

Audit follow-up on the `register → verify → play` critical path. The app was
functional on the surface but broken underneath: the player was silent, emails
weren't really sent, the marketplace gave products away in production, the
chat silently de-synced across pods, maintenance mode was per-pod only,
orphaned tracks accumulated forever in `processing`, and the response cache
was corrupting range-aware media responses. Seven targeted fixes, each with
its own commit, its own tests, and no behaviour change outside scope.
#### Fix 1 — Silent player (`veza-backend-api` + `apps/web`)

- New `GET /api/v1/tracks/:id/stream` handler in
  `internal/core/track/track_hls_handler.go`. Serves the raw file via
  `http.ServeContent` — `Range`, `If-Modified-Since` and `If-None-Match` are
  handled for free, so `<audio>` seek works end-to-end.
- Route registered in `routes_tracks.go` **unconditionally** (outside the
  `HLSEnabled` gate) with `OptionalAuth` so both anonymous and authenticated
  users can stream, and the `share_token` query path keeps working.
- Frontend flag `FEATURES.HLS_STREAMING` default flipped from `true` to
  `false` to match the backend's `HLS_STREAMING` default. The mismatch was
  the root cause: hls.js was attaching to a 404 manifest and leaving the
  audio element silent.
- All playback URL builders (`feedService`, `discoverService`,
  `playerService`, `PlayerQueue`, `SharedPlaylistPage`, `TrackSearchResults`,
  `useLibraryManager`, `useTrackDetailPage`) redirected from `/download` to
  `/stream`. `/download` remains for explicit downloads.
- `useHLSPlayer` — when hls.js emits a fatal non-media error (manifest 404,
  all network retries exhausted), the hook now destroys hls.js and swaps
  the audio element onto `/api/v1/tracks/:id/stream`, so operators turning
  HLS on via feature flag don't re-break the player.
- Tests: 6 Go unit tests covering invalid UUID, missing track, private-track
  forbidden, missing file, full body stream, and `206 Partial Content` with
  `Range: bytes=10-19`. MSW handler and `playerService.test.ts` assertion
  updated.
#### Fix 2 — Bogus email verification (`veza-backend-api` + `docker-compose.*`)

- `core/auth/service.go`: the hard-coded `IsVerified: true` on registration
  is gone. New users start as `is_verified=false` and the existing
  `/auth/verify-email` endpoint (unchanged) flips them once they click the
  link. `TestLogin_EmailNotVerified` now asserts the correct `403`
  behaviour instead of silently accepting unverified logins.
- Registration actually calls `emailService.SendVerificationEmail(...)`
  (previously the code just ran `logger.Info("Sending verification email")`
  without sending anything). On SMTP failure, the handler returns `500` in
  production (fail-loud) and logs a warning in development so local
  sign-ups keep flowing. Same treatment on
  `password_reset_handler.RequestPasswordReset` — the log-only "don't fail
  the user message" path is gone in prod.
- New helper `isProductionEnv()` centralises the
  `APP_ENV=="production"` check in both `core/auth` and `handlers`.
- `docker-compose.yml` and `docker-compose.dev.yml` now ship MailHog
  (`mailhog/mailhog:v1.0.1`, SMTP 1025, UI 8025). Backend dev env vars
  `SMTP_HOST=mailhog SMTP_PORT=1025` are pre-wired.
- Tests: all six `auth` tests adapted to the new async flow
  (`expectRegister` adds a `SendVerificationEmail` mock; the `Login_Success`
  tests manually flip `is_verified` after `Register` to simulate the click
  on the verification link).
#### Fix 3 — Free marketplace (`internal/config/config.go`)

- `ValidateForEnvironment` now refuses `APP_ENV=production` with
  `HYPERSWITCH_ENABLED=false`. Without payments enabled, the marketplace
  flow completes orders as `CREATED` and releases files without charging —
  effectively free. The guard is loud ("...effectively giving away
  products. Set HYPERSWITCH_ENABLED=true...") because a silent misconfig
  here is a revenue leak.
- Called at boot from `NewConfig()` line 513 — config validation happens
  before any HTTP listener starts, so a bad prod config fails fast.
- Tests: 3 new cases (`_fails`, `_succeeds`, `non-production is
  unaffected`) in `validation_test.go`.
#### Fix 4 — Redis required in multi-pod (`config.go` + `chat_pubsub.go`)

- The same `ValidateForEnvironment` now requires `REDIS_URL` to be
  **explicitly** set in production. The struct field has a default
  (`redis://<appDomain>:6379`) that let misconfigured pods boot against
  a phantom host and silently degrade to in-memory PubSub — which is
  fine on one pod and catastrophic on two (chat messages on pod A never
  reach subscribers on pod B).
- The `ChatPubSubService` constructor now emits `ERROR` (was silent) when
  `redisClient` is nil, with a message explicitly naming the failure
  mode: "cross-instance messages will be lost". Same treatment for
  `Publish` fallbacks — `Warn` → `Error`, because this failure is
  runbook-worthy. (A sketch of the loud-fallback pattern follows this list.)
- Tests: `chat_pubsub_test.go` added (constructor log assertion +
  in-memory fan-out happy path) plus 1 new case in `validation_test.go`.
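The constructor and `Publish` changes land in `chat_pubsub.go`, which this
compare does not render. A minimal sketch of the pattern described above;
the struct shape and the `publishInMemory` helper are assumptions, not the
actual implementation:

```go
package chat

import (
	"context"

	"github.com/redis/go-redis/v9"
	"go.uber.org/zap"
)

// ChatPubSubService is reconstructed here for illustration only.
type ChatPubSubService struct {
	redis  *redis.Client
	logger *zap.Logger
}

func NewChatPubSubService(redisClient *redis.Client, logger *zap.Logger) *ChatPubSubService {
	if redisClient == nil {
		// v1.0.5: was silent; now loud, because on more than one pod the
		// in-memory fallback means cross-instance messages will be lost.
		logger.Error("ChatPubSubService: nil redis client, falling back to in-memory PubSub: cross-instance messages will be lost")
	}
	return &ChatPubSubService{redis: redisClient, logger: logger}
}

// Publish sends via Redis when available; otherwise it fans out in-memory
// and logs at Error level (runbook-worthy, not a warning).
func (s *ChatPubSubService) Publish(ctx context.Context, channel string, payload []byte) error {
	if s.redis == nil {
		s.logger.Error("chat publish degraded to in-memory fan-out: cross-instance messages will be lost",
			zap.String("channel", channel))
		return s.publishInMemory(channel, payload)
	}
	return s.redis.Publish(ctx, channel, payload).Err()
}

// publishInMemory is a hypothetical single-process fan-out helper.
func (s *ChatPubSubService) publishInMemory(channel string, payload []byte) error {
	// deliver to local subscribers only
	return nil
}
```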
#### Fix 5 — Maintenance mode persisted in DB (`middleware/maintenance.go`)

- Migration `976_platform_settings.sql` introduces a typed key/value
  table and seeds `maintenance_mode=false`. The value column is split into
  `value_bool` / `value_text` so we avoid string parsing in the hot
  path.
- `middleware/maintenance.go` rewritten. `InitMaintenanceMode(db,
  logger)` wires a DB pool at boot; `MaintenanceModeEnabled()` reads
  from a 10-second TTL cache and refreshes lazily on the next request.
  Toggling on one pod propagates to every pod within ~10 s. (A sketch of
  the cached read follows this list.)
- Admin endpoint `PUT /api/v1/admin/maintenance` now persists via
  `INSERT ... ON CONFLICT DO UPDATE` before calling the in-memory
  setter, so the change survives restarts and is visible cluster-wide.
- Tests: new `TestMaintenanceGin_DBBacked` flips the DB row, waits
  past the TTL, and asserts the cache picked up the change. Existing
  tests preserved.
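The rewritten middleware itself is not among the hunks shown below, so here
is a minimal sketch of the TTL-cached read, assuming the `platform_settings`
schema from migration 976. In the real code the pool is wired once via
`InitMaintenanceMode(db, logger)` rather than passed per call; the names
below are illustrative:

```go
package middleware

import (
	"sync"
	"time"

	"gorm.io/gorm"
)

var (
	maintMu     sync.Mutex
	maintCached bool
	maintAt     time.Time
)

const maintenanceTTL = 10 * time.Second

// MaintenanceModeEnabled returns the cached flag, refreshing it from
// platform_settings at most once per TTL window. On DB error it keeps
// serving the last known value rather than failing the request.
func MaintenanceModeEnabled(db *gorm.DB) bool {
	maintMu.Lock()
	defer maintMu.Unlock()
	if time.Since(maintAt) < maintenanceTTL {
		return maintCached // fresh enough: no DB round-trip on the hot path
	}
	var enabled bool
	err := db.Raw(`SELECT value_bool FROM platform_settings WHERE key = 'maintenance_mode'`).Scan(&enabled).Error
	if err == nil {
		maintCached = enabled
		maintAt = time.Now()
	}
	return maintCached
}
```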
#### Fix 6 — Orphan track cleanup (`internal/jobs/`)

- New `CleanupOrphanTracks` worker. Tracks stuck in `processing` for
  more than one hour with no file on disk (uploader crashed, container
  restart during upload, disk wipe) flip to `status=failed` with
  `status_message = "orphan cleanup: file missing on disk after >1h in
  processing"`. It never deletes the row, never touches present files or
  already-failed rows, and is safe to re-run. (A sketch of the core loop
  follows this list.)
- `ScheduleOrphanTracksCleanup(db, logger)` runs once at boot and then
  hourly — wired in `cmd/api/main.go` alongside the HTTP listener.
- Tests: 5 cases in `cleanup_orphan_tracks_test.go` covering the happy
  path and four negatives (file still present, track too recent, already
  failed, nil database).
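The new worker file appears at the very end of this compare but is cut off
mid-comment, so here is a sketch of the core selection-and-flip loop. The
`updated_at` cutoff column and the exact GORM calls are assumptions:

```go
package jobs

import (
	"os"
	"time"

	"go.uber.org/zap"
	"gorm.io/gorm"

	"veza-backend-api/internal/models"
)

// CleanupOrphanTracks flips abandoned `processing` rows to `failed` when the
// source file is gone. Sketch only; see cleanup_orphan_tracks.go for the
// real implementation.
func CleanupOrphanTracks(db *gorm.DB, logger *zap.Logger) {
	if db == nil {
		logger.Warn("orphan cleanup skipped: nil database")
		return
	}
	cutoff := time.Now().Add(-time.Hour)
	var stuck []models.Track
	if err := db.Where("status = ? AND updated_at < ?", "processing", cutoff).Find(&stuck).Error; err != nil {
		logger.Error("orphan cleanup query failed", zap.Error(err))
		return
	}
	for _, t := range stuck {
		if _, err := os.Stat(t.FilePath); err == nil {
			continue // file still present: not an orphan, leave it alone
		}
		db.Model(&models.Track{}).Where("id = ?", t.ID).Updates(map[string]interface{}{
			"status":         "failed",
			"status_message": "orphan cleanup: file missing on disk after >1h in processing",
		})
	}
}
```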
#### Fix 7 — Response cache corrupting binary media (`middleware/response_cache.go`)

Surfaced by the v1.0.5 browser smoke test. `ResponseCache` captures the
entire body into a `bytes.Buffer`, JSON-serialises it (escaping non-UTF-8
bytes) and replays it via `c.Data` for subsequent hits. For `/stream`,
`/download` and `/hls/*` this had two failure modes:

1. `Range` headers were never honoured — the cache replayed the full
   body on every request, stripped `Accept-Ranges`, and left the
   `<audio>` element unable to seek. A `Range: bytes=100-299` request
   got back `200 OK` with 48944 bytes instead of `206` with 200.
2. Non-UTF-8 bytes got escaped through the JSON round-trip
   (`\uFFFD` substitution etc.), corrupting the MP3 payload so even
   full plays could fail mid-stream (the served body's MD5 diverged from
   the source file).

Fix: skip the cache entirely for any path containing `/stream`,
`/download` or `/hls/`, and for any request carrying a `Range` header
(belt-and-suspenders for any future media endpoint). All other
anonymous GETs keep their 5-minute TTL. (A sketch of the skip rule
follows.)
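`response_cache.go` is not rendered in this compare, so the skip rule is
sketched here; the function name `shouldBypassCache` is hypothetical:

```go
package middleware

import (
	"strings"

	"github.com/gin-gonic/gin"
)

// shouldBypassCache reports whether a request must skip the response cache.
func shouldBypassCache(c *gin.Context) bool {
	p := c.Request.URL.Path
	// Media endpoints carry binary bodies plus Range semantics: never cache.
	if strings.Contains(p, "/stream") || strings.Contains(p, "/download") || strings.Contains(p, "/hls/") {
		return true
	}
	// Belt-and-suspenders: any ranged request must reach the real handler
	// so http.ServeContent can answer 206 with the right byte window.
	return c.GetHeader("Range") != ""
}
```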
Live verification after the patch:

- Full GET: `200 OK`, `Accept-Ranges: bytes`, `Content-Length: 48944`,
  served body MD5 matches the source file byte-for-byte.
- Range `100-299`: `206 Partial Content`,
  `Content-Range: bytes 100-299/48944`, exactly 200 bytes.
- Browser `<audio>.play()` succeeds, `currentTime` progresses,
  `seek(1.5)` accepted (`readyState=4`, no error).
### Production guards summary

`config.go:886 Validate()` (base) + `config.go:810 ValidateForEnvironment()`
(per-env) — the prod branch now rejects boot if any of:

- `CORS_ALLOWED_ORIGINS` missing or contains `*`
- `LOG_LEVEL=DEBUG`
- `CLAMAV_REQUIRED != true`
- `CHAT_JWT_SECRET == JWT_SECRET`
- `OAUTH_ENCRYPTION_KEY` shorter than 32 bytes
- `JWT_ISSUER` / `JWT_AUDIENCE` empty
- **`HYPERSWITCH_ENABLED != true`** (new)
- **`REDIS_URL` not explicitly set** (new)
### Known gaps (parked for v1.0.6)

- Hyperswitch refund path doesn't propagate to the PSP
- Livestream has no UI feedback when `nginx-rtmp` is down
- Upload size mismatch (frontend 500 MB, backend 100 MB)
- RabbitMQ silently drops messages on enqueue failure
- `SMTP_HOST` not injected in `make dev` (host-mode ergonomics, not a
  code bug — the SMTP_HOST env is only wired into the `docker-dev`
  profile where the backend runs in-container)
- Upload route gated by the `creator` role with no self-service path to
  the role — new users can't upload without manual DB escalation

## [v1.0.4] - 2026-04-15

### Cleanup sprint — 7 days of post-audit cleanup
VERSION (2 lines changed)

@@ -1 +1 @@
-1.0.4
+1.0.5
@@ -48,10 +48,15 @@ export const FEATURES = {
   /**
    * HLS Streaming
    * Backend endpoints: /api/v1/tracks/:id/hls/info, /api/v1/tracks/:id/hls/status
    *
+   * Default is `false` to match backend `HLS_STREAMING` env (off by default).
+   * When off, playback goes through `/api/v1/tracks/:id/stream` (MP3 range requests).
+   * Enable via VITE_FEATURE_HLS_STREAMING=true in environments where the backend
+   * transcoder is actually running.
    */
   HLS_STREAMING: parseFeatureEnv(
     import.meta.env.VITE_FEATURE_HLS_STREAMING,
-    true,
+    false,
   ),

   /**
@@ -119,7 +119,7 @@ export function useLibraryManager(
       artist: track.artist,
       album: t.album,
       duration: track.duration,
-      url: t.stream_manifest_url || `/api/v1/tracks/${track.id}/download`,
+      url: t.stream_manifest_url || `/api/v1/tracks/${track.id}/stream`,
       cover: t.cover_art_path,
       genre: t.genre,
     };
@@ -29,7 +29,7 @@ function mapApiTrackToPlayerTrack(t: ApiTrack): PlayerTrack {
     title: t.title,
     artist: t.artist,
     duration: t.duration ?? 0,
-    url: apiTrack.stream_manifest_url || `/api/v1/tracks/${t.id}/download`,
+    url: apiTrack.stream_manifest_url || `/api/v1/tracks/${t.id}/stream`,
     cover: apiTrack.cover_art_path,
     genre: t.genre,
     like_count: t.like_count,

@@ -44,7 +44,7 @@ function mapSessionItemToPlayerTrack(item: { id: string; track?: { id: string; t
     title: t.title ?? '',
     artist: t.artist ?? '',
     duration: t.duration ?? 0,
-    url: `/api/v1/tracks/${t.id}/download`,
+    url: `/api/v1/tracks/${t.id}/stream`,
     cover: t.cover_art_path,
     genre: t.genre,
     like_count: t.like_count,
@@ -242,7 +242,7 @@ describe('playerService', () => {
     );
   });

-  it('should fallback to direct URL when track has invalid media URL', async () => {
+  it('should fallback to direct stream URL when track has invalid media URL', async () => {
     const trackWithInvalidUrl = {
       id: 1,
       title: 'Test',

@@ -252,10 +252,11 @@ describe('playerService', () => {

     await service.loadTrack(trackWithInvalidUrl);

-    // When URL is invalid (e.g. 'undefined'), the service falls back to direct download URL
+    // When URL is invalid (e.g. 'undefined'), the service falls back to the
+    // /api/v1/tracks/:id/stream endpoint (always on, Range-aware).
     const srcAfterFallback = audioElement.src;
     expect(srcAfterFallback).toContain('/api/v1/tracks/');
-    expect(srcAfterFallback).toContain('/download');
+    expect(srcAfterFallback).toContain('/stream');
   });
 });
@@ -270,11 +270,12 @@ export class AudioPlayerService {
   private fallbackAttempted = false;

   /**
-   * Build a direct download URL as fallback when HLS streaming fails.
-   * Uses the backend's /api/v1/tracks/:id/download endpoint.
+   * Build a direct streaming URL as fallback when HLS streaming fails.
+   * Uses the backend's /api/v1/tracks/:id/stream endpoint which serves
+   * the raw audio via http.ServeContent (supports Range requests for seeking).
    */
   private static getDirectAudioURL(trackId: string): string {
-    return `/api/v1/tracks/${trackId}/download`;
+    return `/api/v1/tracks/${trackId}/stream`;
   }

   /**

@@ -312,7 +313,8 @@ export class AudioPlayerService {
     this.fallbackAttempted = false;

     if (!AudioPlayerService.isValidMediaUrl(track.url)) {
-      // No HLS URL available — try direct download immediately
+      // No valid playback URL supplied — fall back to the always-on
+      // /api/v1/tracks/:id/stream endpoint.
       const directUrl = AudioPlayerService.getDirectAudioURL(track.id);
       this.audioElement.src = directUrl;
       this.audioElement.load();
@@ -28,7 +28,7 @@ function mapToPlayerTrack(t: ApiTrack): PlayerTrack {
     artist: t.artist,
     album: t.album,
     duration: t.duration,
-    url: (t as { stream_manifest_url?: string }).stream_manifest_url || `/api/v1/tracks/${t.id}/download`,
+    url: (t as { stream_manifest_url?: string }).stream_manifest_url || `/api/v1/tracks/${t.id}/stream`,
     cover: (t as { cover_art_path?: string }).cover_art_path,
     genre: t.genre,
   };
@@ -71,21 +71,34 @@ export function useHLSPlayer(
       setState(prev => ({ ...prev, currentLevel: data.level }));
     });

-    hls.on(Hls.Events.ERROR, (_event, data) => {
-      if (data.fatal) {
-        switch (data.type) {
-          case Hls.ErrorTypes.NETWORK_ERROR:
-            hls.startLoad();
-            break;
-          case Hls.ErrorTypes.MEDIA_ERROR:
-            hls.recoverMediaError();
-            break;
-          default:
-            hls.destroy();
-            setState(prev => ({ ...prev, isHLSActive: false }));
-            break;
-        }
-      }
-    });
+    // Swap the audio element onto the always-on /stream endpoint and free
+    // the hls.js instance. Used whenever HLS has failed in a way that won't
+    // recover — typically a manifest 404 because the backend has HLS off,
+    // or the transcoder is down.
+    const fallbackToDirectStream = () => {
+      const audio = audioRef.current;
+      hls.destroy();
+      hlsRef.current = null;
+      setState(prev => ({ ...prev, isHLSActive: false }));
+      if (audio && trackId) {
+        audio.src = `/api/v1/tracks/${trackId}/stream`;
+        audio.load();
+      }
+    };
+
+    hls.on(Hls.Events.ERROR, (_event, data) => {
+      if (!data.fatal) return;
+
+      // MEDIA errors can still recover (decoder glitch, bad segment).
+      if (data.type === Hls.ErrorTypes.MEDIA_ERROR) {
+        hls.recoverMediaError();
+        return;
+      }
+
+      // Manifest-level network errors will never come back (404 stays 404).
+      // Any other fatal error means hls.js has already exhausted its
+      // internal retries — give up on HLS and play the raw file.
+      fallbackToDirectStream();
+    });

     hlsRef.current = hls;
@@ -61,7 +61,7 @@ export function TrackSearchResults({
     title: track.title,
     artist: track.artist,
     duration: track.duration ?? 0,
-    url: `/api/v1/tracks/${track.id}/download`,
+    url: `/api/v1/tracks/${track.id}/stream`,
     cover: track.cover_art_path,
     genre: track.genre,
     like_count: track.like_count,
@@ -54,7 +54,7 @@ export function useTrackDetailPage(trackIdOverride?: string) {
     artist: t.artist,
     album: t.album,
     duration: t.duration,
-    url: (t as { stream_manifest_url?: string }).stream_manifest_url || `/api/v1/tracks/${t.id}/download`,
+    url: (t as { stream_manifest_url?: string }).stream_manifest_url || `/api/v1/tracks/${t.id}/stream`,
     cover: (t as { cover_art_path?: string }).cover_art_path,
     genre: t.genre,
   });
@@ -298,6 +298,15 @@ export const handlersTracks = [
     return new HttpResponse(new ArrayBuffer(1024), { headers: { 'Content-Type': 'audio/mpeg' } });
   }),

+  http.get('*/api/v1/tracks/:id/stream', () => {
+    return new HttpResponse(new ArrayBuffer(1024), {
+      headers: {
+        'Content-Type': 'audio/mpeg',
+        'Accept-Ranges': 'bytes',
+      },
+    });
+  }),
+
   http.post('*/api/v1/tracks/:id/like', () => HttpResponse.json({ success: true })),
   http.delete('*/api/v1/tracks/:id/like', () => HttpResponse.json({ success: true })),
@@ -37,7 +37,7 @@ function getTrackPlaybackURL(bt: BackendTrack): string {
   if (bt.stream_manifest_url) {
     return String(bt.stream_manifest_url);
   }
-  return `/api/v1/tracks/${bt.id}/download`;
+  return `/api/v1/tracks/${bt.id}/stream`;
 }

 function mapBackendTrackToTrack(bt: BackendTrack): Track {
@@ -34,14 +34,13 @@ interface BackendTrack {
 /**
  * Build the best playback URL for a track:
  * 1. If backend provides stream_manifest_url (HLS ready) → use it
- * 2. Otherwise → direct download from backend API (always works if file exists)
+ * 2. Otherwise → /stream endpoint (http.ServeContent, Range-aware)
  */
 function getTrackPlaybackURL(bt: BackendTrack): string {
   if (bt.stream_manifest_url) {
     return String(bt.stream_manifest_url);
   }
-  // Direct audio streaming from backend (no HLS server needed)
-  return `/api/v1/tracks/${bt.id}/download`;
+  return `/api/v1/tracks/${bt.id}/stream`;
 }

 function mapBackendTrackToTrack(bt: BackendTrack): Track {
@@ -111,6 +111,24 @@ services:
       cpus: '0.5'
       memory: 1G

+  # MailHog - Local SMTP capture (dev only)
+  # Backend points SMTP_HOST=localhost:1025 (when running on host) or mailhog:1025
+  # (container) and outbound mail shows up in the web UI on port 8025.
+  mailhog:
+    image: mailhog/mailhog:v1.0.1
+    container_name: veza_mailhog
+    restart: unless-stopped
+    ports:
+      - "${PORT_MAILHOG_SMTP:-1025}:1025"
+      - "${PORT_MAILHOG_UI:-8025}:8025"
+    networks:
+      - veza-net
+    deploy:
+      resources:
+        limits:
+          cpus: '0.10'
+          memory: 64M
+
   minio:
     image: minio/minio:latest
     container_name: veza_minio
@@ -57,6 +57,25 @@ services:
       reservations:
         memory: 32M

+  # MailHog - Local SMTP capture for development
+  # Receives every outbound mail from backend-api (SMTP 1025) and exposes a
+  # web UI (8025) where devs can inspect verification and password-reset
+  # emails without wiring a real SMTP provider.
+  mailhog:
+    image: mailhog/mailhog:v1.0.1
+    container_name: veza_mailhog
+    restart: unless-stopped
+    ports:
+      - "${PORT_MAILHOG_SMTP:-1025}:1025"
+      - "${PORT_MAILHOG_UI:-8025}:8025"
+    networks:
+      - veza-net
+    deploy:
+      resources:
+        limits:
+          cpus: '0.10'
+          memory: 64M
+
   # ClamAV - Virus scanning for uploads
   # SECURITY(MEDIUM-003): Pin ClamAV image to specific version instead of :latest
   clamav:

@@ -190,6 +209,10 @@ services:
       - AWS_REGION=us-east-1
       - HLS_STREAMING=true
       - HLS_STORAGE_DIR=/data/hls
+      - SMTP_HOST=mailhog
+      - SMTP_PORT=1025
+      - FROM_EMAIL=${FROM_EMAIL:-no-reply@veza.local}
+      - FROM_NAME=${FROM_NAME:-Veza (dev)}
     volumes:
       - hls-data:/data/hls
     ports:
@@ -22,6 +22,7 @@ import (
 	"veza-backend-api/internal/config"
 	"veza-backend-api/internal/core/marketplace"
 	vezaes "veza-backend-api/internal/elasticsearch"
+	"veza-backend-api/internal/jobs"
 	"veza-backend-api/internal/metrics"
 	"veza-backend-api/internal/services"
 	"veza-backend-api/internal/shutdown"

@@ -276,6 +277,10 @@ func main() {
 		os.Exit(1)
 	}

+	// v1.0.4: Hourly cleanup of tracks stuck in `processing` whose upload file
+	// vanished (crash, SIGKILL, disk wipe). Keeps the tracks table honest.
+	jobs.ScheduleOrphanTracksCleanup(db, logger)
+
 	// HTTP server configuration
 	port := fmt.Sprintf("%d", cfg.AppPort)
 	if cfg.AppPort == 0 {
@@ -196,7 +196,12 @@ func (r *APIRouter) Setup(router *gin.Engine) error {

 	// Global middlewares (after CORS)
 	router.Use(middleware.CacheHeaders(middleware.DefaultCacheHeadersConfig())) // v0.12.4: CDN cache headers
-	router.Use(middleware.MaintenanceGin()) // v0.803 ADM1-03: Maintenance mode (503 except /health, /admin)
+	// v1.0.4: Back the maintenance flag with platform_settings.maintenance_mode
+	// so flipping it on one pod propagates to every other pod within ~10s.
+	if r.db != nil && r.db.GormDB != nil {
+		middleware.InitMaintenanceMode(r.db.GormDB, r.logger)
+	}
+	router.Use(middleware.MaintenanceGin()) // v0.803 ADM1-03: Maintenance mode (503 except /health, /admin)
 	router.Use(middleware.RequestLogger(r.logger)) // structured logger
 	router.Use(middleware.Metrics()) // Prometheus Metrics
 	router.Use(middleware.SentryRecover(r.logger)) // Sentry error tracking
@@ -419,7 +419,8 @@ func (r *APIRouter) setupCoreProtectedRoutes(v1 *gin.RouterGroup) {
 		admin.GET("/reports", reportHandler.ListReports)
 		admin.POST("/reports/:id/resolve", reportHandler.ResolveReport)

-		// v0.803 ADM1-03: Maintenance mode toggle
+		// v0.803 ADM1-03: Maintenance mode toggle — v1.0.4: persisted via
+		// platform_settings so a toggle on one pod affects every other pod.
 		admin.PUT("/maintenance", func(c *gin.Context) {
 			var req struct {
 				Enabled bool `json:"enabled"`

@@ -428,6 +429,21 @@ func (r *APIRouter) setupCoreProtectedRoutes(v1 *gin.RouterGroup) {
 				c.JSON(http.StatusBadRequest, gin.H{"error": "enabled is required"})
 				return
 			}
+			if r.db != nil && r.db.GormDB != nil {
+				if err := r.db.GormDB.WithContext(c.Request.Context()).Exec(
+					`INSERT INTO platform_settings (key, value_bool, description)
+					 VALUES ('maintenance_mode', ?, 'When TRUE, all API requests outside the exempt list return 503.')
+					 ON CONFLICT (key) DO UPDATE SET value_bool = EXCLUDED.value_bool, updated_at = NOW()`,
+					req.Enabled,
+				).Error; err != nil {
+					r.logger.Error("Failed to persist maintenance flag",
+						zap.Bool("enabled", req.Enabled),
+						zap.Error(err),
+					)
+					c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to persist maintenance flag"})
+					return
+				}
+			}
 			middleware.SetMaintenanceMode(req.Enabled)
 			c.JSON(http.StatusOK, gin.H{"maintenance_mode": req.Enabled})
 		})
@@ -114,6 +114,11 @@ func (r *APIRouter) setupTrackRoutes(router *gin.RouterGroup) {
 	tracks.GET("/:id/waveform", trackHandler.GetWaveform)
 	tracks.GET("/:id/history", trackHandler.GetTrackHistory)
 	tracks.GET("/:id/download", trackHandler.DownloadTrack)
+	if r.config.AuthMiddleware != nil {
+		tracks.GET("/:id/stream", r.config.AuthMiddleware.OptionalAuth(), trackHandler.StreamTrack)
+	} else {
+		tracks.GET("/:id/stream", trackHandler.StreamTrack)
+	}
 	tracks.GET("/shared/:token", trackHandler.GetSharedTrack)
 	if r.config.AuthMiddleware != nil {
 		tracks.GET("/:id/repost", r.config.AuthMiddleware.OptionalAuth(), trackHandler.GetRepostStatus) // v0.10.3 F203
@@ -861,6 +861,21 @@ func (c *Config) ValidateForEnvironment() error {
 			return fmt.Errorf("JWT_ISSUER and JWT_AUDIENCE must be set in production for consistent JWT validation. Set JWT_ISSUER and JWT_AUDIENCE environment variables")
 		}

+		// 9. Hyperswitch must be enabled in production — otherwise the marketplace
+		// silently "sells" products without taking payment (orders complete as
+		// CREATED and files are released for free).
+		if !c.HyperswitchEnabled {
+			return fmt.Errorf("HYPERSWITCH_ENABLED must be true in production. With payments disabled, marketplace orders complete without charging, effectively giving away products. Set HYPERSWITCH_ENABLED=true and configure HYPERSWITCH_API_KEY / HYPERSWITCH_WEBHOOK_SECRET")
+		}
+
+		// 10. REDIS_URL must be *explicitly* set in production. The struct default
+		// (redis://<appDomain>:6379) lets a misconfigured pod start up with
+		// in-memory fallbacks — and in multi-pod deployments that silently
+		// breaks cross-instance PubSub (chat, session revocation, etc.).
+		if strings.TrimSpace(os.Getenv("REDIS_URL")) == "" {
+			return fmt.Errorf("REDIS_URL must be explicitly set in production. A missing value lets the app boot against the default host and silently degrade to in-memory fallbacks that break cross-pod features")
+		}
+
 	case EnvTest:
 		// TEST: validation adapted to tests
 		// CORS may be empty or explicitly configured
@@ -638,6 +638,7 @@ func TestLoadConfig_ProdValid(t *testing.T) {
 	originalEnv := os.Getenv("APP_ENV")
 	originalCORSOrigins := os.Getenv("CORS_ALLOWED_ORIGINS")
 	originalClamAV := os.Getenv("CLAMAV_REQUIRED")
+	originalRedisURL := os.Getenv("REDIS_URL")

 	// Clean up after the test
 	defer func() {

@@ -656,11 +657,17 @@ func TestLoadConfig_ProdValid(t *testing.T) {
 		} else {
 			os.Unsetenv("CLAMAV_REQUIRED")
 		}
+		if originalRedisURL != "" {
+			os.Setenv("REDIS_URL", originalRedisURL)
+		} else {
+			os.Unsetenv("REDIS_URL")
+		}
 	}()

 	// Valid production configuration
 	os.Setenv("APP_ENV", "production")
 	os.Setenv("CLAMAV_REQUIRED", "true")
+	os.Setenv("REDIS_URL", "redis://:password@prod-redis:6379")

 	// Create a minimal valid config (all fields required in prod)
 	cfg := &Config{

@@ -671,12 +678,13 @@ func TestLoadConfig_ProdValid(t *testing.T) {
 		JWTIssuer:   "veza-api",
 		JWTAudience: "veza-platform",
 		DatabaseURL: "postgresql://test:test@localhost:5432/test_db",
-		RedisURL:    "redis://localhost:6379",
+		RedisURL:    "redis://:password@prod-redis:6379",
 		AppPort:     8080,
 		LogLevel:    "INFO",
 		RateLimitLimit:  100, // Valid value to pass Validate()
 		RateLimitWindow: 60,  // Valid value to pass Validate()
 		CORSOrigins: []string{"https://app.veza.com", "https://www.veza.com"}, // Valid - no wildcard
+		HyperswitchEnabled: true, // Payments must be on in prod (v1.0.4)
 	}

 	// Create a minimal logger for the config
@@ -389,14 +389,21 @@ func TestValidateNoBypassFlagsInProduction(t *testing.T) {

 // TestValidateForEnvironment_ClamAVRequiredInProduction verifies that CLAMAV_REQUIRED=false fails in production (P1.4)
 func TestValidateForEnvironment_ClamAVRequiredInProduction(t *testing.T) {
-	orig := os.Getenv("CLAMAV_REQUIRED")
+	origClamAV := os.Getenv("CLAMAV_REQUIRED")
+	origRedis := os.Getenv("REDIS_URL")
 	defer func() {
-		if orig != "" {
-			os.Setenv("CLAMAV_REQUIRED", orig)
+		if origClamAV != "" {
+			os.Setenv("CLAMAV_REQUIRED", origClamAV)
 		} else {
 			os.Unsetenv("CLAMAV_REQUIRED")
 		}
+		if origRedis != "" {
+			os.Setenv("REDIS_URL", origRedis)
+		} else {
+			os.Unsetenv("REDIS_URL")
+		}
 	}()
+	os.Setenv("REDIS_URL", "redis://:password@prod-redis:6379")

 	cfg := &Config{
 		Env: EnvProduction,

@@ -412,6 +419,7 @@ func TestValidateForEnvironment_ClamAVRequiredInProduction(t *testing.T) {
 		RateLimitWindow: 60,
 		CORSOrigins:     []string{"https://example.com"},
 		LogLevel:        "INFO",
+		HyperswitchEnabled: true, // v1.0.4: payments must be on in prod
 	}
 	logger, _ := zap.NewDevelopment()
 	cfg.Logger = logger

@@ -430,6 +438,124 @@ func TestValidateForEnvironment_ClamAVRequiredInProduction(t *testing.T) {
 	})
 }

+// TestValidateForEnvironment_HyperswitchRequiredInProduction verifies that
+// HYPERSWITCH_ENABLED=false fails in production (v1.0.4 free-marketplace fix).
+func TestValidateForEnvironment_HyperswitchRequiredInProduction(t *testing.T) {
+	origClamAV := os.Getenv("CLAMAV_REQUIRED")
+	origRedis := os.Getenv("REDIS_URL")
+	defer func() {
+		if origClamAV != "" {
+			os.Setenv("CLAMAV_REQUIRED", origClamAV)
+		} else {
+			os.Unsetenv("CLAMAV_REQUIRED")
+		}
+		if origRedis != "" {
+			os.Setenv("REDIS_URL", origRedis)
+		} else {
+			os.Unsetenv("REDIS_URL")
+		}
+	}()
+	os.Setenv("CLAMAV_REQUIRED", "true")
+	os.Setenv("REDIS_URL", "redis://:password@prod-redis:6379")
+
+	baseCfg := func() *Config {
+		c := &Config{
+			Env:                EnvProduction,
+			AppPort:            8080,
+			JWTSecret:          strings.Repeat("a", 32),
+			ChatJWTSecret:      strings.Repeat("b", 32),
+			OAuthEncryptionKey: strings.Repeat("c", 32),
+			JWTIssuer:          "veza-api",
+			JWTAudience:        "veza-platform",
+			DatabaseURL:        "postgresql://user:pass@localhost:5432/db",
+			RedisURL:           "redis://localhost:6379",
+			RateLimitLimit:     100,
+			RateLimitWindow:    60,
+			CORSOrigins:        []string{"https://example.com"},
+			LogLevel:           "INFO",
+		}
+		logger, _ := zap.NewDevelopment()
+		c.Logger = logger
+		return c
+	}
+
+	t.Run("production with HYPERSWITCH_ENABLED=false fails", func(t *testing.T) {
+		cfg := baseCfg()
+		cfg.HyperswitchEnabled = false
+		err := cfg.ValidateForEnvironment()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "HYPERSWITCH_ENABLED must be true in production")
+	})
+
+	t.Run("production with HYPERSWITCH_ENABLED=true succeeds", func(t *testing.T) {
+		cfg := baseCfg()
+		cfg.HyperswitchEnabled = true
+		err := cfg.ValidateForEnvironment()
+		require.NoError(t, err)
+	})
+
+	t.Run("non-production is unaffected", func(t *testing.T) {
+		cfg := baseCfg()
+		cfg.Env = EnvDevelopment
+		cfg.HyperswitchEnabled = false
+		// Dev doesn't require HyperswitchEnabled — marketplace disabled is fine locally.
+		err := cfg.ValidateForEnvironment()
+		require.NoError(t, err)
+	})
+}
+
+// TestValidateForEnvironment_RedisURLRequiredInProduction verifies that an
+// unset REDIS_URL fails in production (v1.0.4 multi-pod fallback fix).
+func TestValidateForEnvironment_RedisURLRequiredInProduction(t *testing.T) {
+	origClamAV := os.Getenv("CLAMAV_REQUIRED")
+	origRedis := os.Getenv("REDIS_URL")
+	defer func() {
+		if origClamAV != "" {
+			os.Setenv("CLAMAV_REQUIRED", origClamAV)
+		} else {
+			os.Unsetenv("CLAMAV_REQUIRED")
+		}
+		if origRedis != "" {
+			os.Setenv("REDIS_URL", origRedis)
+		} else {
+			os.Unsetenv("REDIS_URL")
+		}
+	}()
+	os.Setenv("CLAMAV_REQUIRED", "true")
+
+	cfg := &Config{
+		Env:                EnvProduction,
+		AppPort:            8080,
+		JWTSecret:          strings.Repeat("a", 32),
+		ChatJWTSecret:      strings.Repeat("b", 32),
+		OAuthEncryptionKey: strings.Repeat("c", 32),
+		JWTIssuer:          "veza-api",
+		JWTAudience:        "veza-platform",
+		DatabaseURL:        "postgresql://user:pass@localhost:5432/db",
+		RedisURL:           "redis://localhost:6379", // struct field is valid (default)
+		RateLimitLimit:     100,
+		RateLimitWindow:    60,
+		CORSOrigins:        []string{"https://example.com"},
+		LogLevel:           "INFO",
+		HyperswitchEnabled: true,
+	}
+	logger, _ := zap.NewDevelopment()
+	cfg.Logger = logger
+
+	t.Run("production with unset REDIS_URL env fails", func(t *testing.T) {
+		os.Unsetenv("REDIS_URL")
+		err := cfg.ValidateForEnvironment()
+		require.Error(t, err)
+		assert.Contains(t, err.Error(), "REDIS_URL must be explicitly set in production")
+	})
+
+	t.Run("production with REDIS_URL env succeeds", func(t *testing.T) {
+		os.Setenv("REDIS_URL", "redis://:password@prod-redis:6379")
+		err := cfg.ValidateForEnvironment()
+		require.NoError(t, err)
+	})
+}
+
 // TestValidateForEnvironment_ChatJWTSecretInProduction verifies CHAT_JWT_SECRET must differ from JWT_SECRET in production (v0.902)
 func TestValidateForEnvironment_ChatJWTSecretInProduction(t *testing.T) {
 	secret := strings.Repeat("a", 32)
@@ -16,10 +16,11 @@ import (
 	"github.com/stretchr/testify/mock"
 	"github.com/stretchr/testify/require"
 	"go.uber.org/zap/zaptest"
+	"gorm.io/gorm"
 )

-func setupTestAuthHandler(t *testing.T) (*AuthHandler, *gin.Engine, *TestMocks, func()) {
-	service, _, mocks, cleanupService := setupTestAuthService(t)
+func setupTestAuthHandler(t *testing.T) (*AuthHandler, *gin.Engine, *TestMocks, *gorm.DB, func()) {
+	service, db, mocks, cleanupService := setupTestAuthService(t)

 	gin.SetMode(gin.TestMode)
 	router := gin.New()

@@ -30,7 +31,7 @@ func setupTestAuthHandler(t *testing.T) (*AuthHandler, *gin.Engine, *TestMocks,
 		zaptest.NewLogger(t),
 	)

-	return handler, router, mocks, func() {
+	return handler, router, mocks, db, func() {
 		cleanupService()
 	}
 }

@@ -38,13 +39,14 @@ func setupTestAuthHandler(t *testing.T) (*AuthHandler, *gin.Engine, *TestMocks,
 func expectRegister(mocks *TestMocks) {
 	mocks.EmailVerification.On("GenerateToken").Return("verification-token", nil).Maybe()
 	mocks.EmailVerification.On("StoreToken", mock.Anything, mock.Anything, "verification-token").Return(nil).Maybe()
+	mocks.Email.On("SendVerificationEmail", mock.AnythingOfType("string"), mock.AnythingOfType("string")).Return(nil).Maybe()
 	mocks.JWT.On("GenerateAccessToken", mock.AnythingOfType("*models.User")).Return("access-token", nil).Once()
 	mocks.JWT.On("GenerateRefreshToken", mock.AnythingOfType("*models.User")).Return("refresh-token", nil).Once()
 	mocks.RefreshToken.On("Store", mock.Anything, "refresh-token", mock.Anything).Return(nil).Once()
 }

 func TestAuthHandler_Register_Success(t *testing.T) {
-	handler, router, mocks, cleanup := setupTestAuthHandler(t)
+	handler, router, mocks, _, cleanup := setupTestAuthHandler(t)
 	defer cleanup()

 	router.POST("/register", handler.Register)

@@ -76,7 +78,7 @@ func TestAuthHandler_Register_Success(t *testing.T) {
 }

 func TestAuthHandler_Login_Success(t *testing.T) {
-	handler, router, mocks, cleanup := setupTestAuthHandler(t)
+	handler, router, mocks, db, cleanup := setupTestAuthHandler(t)
 	defer cleanup()

 	router.POST("/login", handler.Login)

@@ -86,9 +88,13 @@ func TestAuthHandler_Login_Success(t *testing.T) {

 	expectRegister(mocks)

-	_, _, err := handler.authService.Register(ctx, "login_h@example.com", "login_h", "StrongPassword123!")
+	registeredUser, _, err := handler.authService.Register(ctx, "login_h@example.com", "login_h", "StrongPassword123!")
 	require.NoError(t, err)

+	// Simulate the user clicking the verification link — Register now leaves
+	// is_verified=false and Login refuses unverified users.
+	require.NoError(t, db.Model(&models.User{}).Where("id = ?", registeredUser.ID).Update("is_verified", true).Error)
+
 	reqBody := dto.LoginRequest{
 		Email:    "login_h@example.com",
 		Password: "StrongPassword123!",

@@ -116,7 +122,7 @@ func TestAuthHandler_Login_Success(t *testing.T) {
 }

 func TestAuthHandler_Login_InvalidCredentials(t *testing.T) {
-	handler, router, _, cleanup := setupTestAuthHandler(t)
+	handler, router, _, _, cleanup := setupTestAuthHandler(t)
 	defer cleanup()

 	router.POST("/login", handler.Login)

@@ -137,7 +143,7 @@ func TestAuthHandler_Login_InvalidCredentials(t *testing.T) {
 }

 func TestAuthHandler_Refresh_Success(t *testing.T) {
-	handler, _, mocks, cleanup := setupTestAuthHandler(t)
+	handler, _, mocks, _, cleanup := setupTestAuthHandler(t)
 	defer cleanup()

 	expectRegister(mocks)

@@ -176,7 +182,7 @@ func TestAuthHandler_Refresh_Success(t *testing.T) {
 }

 func TestAuthHandler_CheckUsername_Available(t *testing.T) {
-	handler, _, _, cleanup := setupTestAuthHandler(t)
+	handler, _, _, _, cleanup := setupTestAuthHandler(t)
 	defer cleanup()

 	w := httptest.NewRecorder()

@@ -196,7 +202,7 @@ func TestAuthHandler_CheckUsername_Available(t *testing.T) {
 }

 func TestAuthHandler_GetMe_Success(t *testing.T) {
-	handler, _, mocks, cleanup := setupTestAuthHandler(t)
+	handler, _, mocks, _, cleanup := setupTestAuthHandler(t)
 	defer cleanup()

 	ctx := context.Background()

@@ -223,7 +229,7 @@ func TestAuthHandler_GetMe_Success(t *testing.T) {
 }

 func TestAuthHandler_Logout_Success(t *testing.T) {
-	handler, _, mocks, cleanup := setupTestAuthHandler(t)
+	handler, _, mocks, _, cleanup := setupTestAuthHandler(t)
 	defer cleanup()

 	ctx := context.Background()
@@ -197,7 +197,7 @@ func (s *AuthService) Register(ctx context.Context, email, username, password st
 		PasswordHash: string(hashedPassword),
 		Role:         "user",  // Default value (must match the PostgreSQL ENUM)
 		IsActive:     true,    // Default value
-		IsVerified:   true,    // MVP: auto-verify email to allow immediate login
+		IsVerified:   false,   // Verified via POST /auth/verify-email after the user clicks the link
 		IsBanned:     false,   // Default value (required NOT NULL field)
 		TokenVersion: 0,       // Default value (required NOT NULL field)
 		LoginCount:   0,       // Default value (required NOT NULL field)

@@ -354,27 +354,62 @@ func (s *AuthService) Register(ctx context.Context, email, username, password st
 		)
 	}

-	// Generate the email verification token (non-blocking).
-	// If generation fails, registration continues anyway; the user can
-	// request a new token later.
+	// Generate the verification token and dispatch the email.
+	// In production an SMTP failure is a hard error — we refuse to create the
+	// account silently, otherwise the user ends up stuck with no way to verify.
+	// In development we log a warning so local sign-ups continue to work even
+	// when MailHog/SMTP is not running (dev can verify by reading the log).
 	if s.emailVerificationService != nil {
-		token, err := s.emailVerificationService.GenerateToken()
-		if err != nil {
-			s.logger.Warn("Failed to generate email verification token (non-blocking)", zap.Error(err))
-		} else {
-			// Store the token
-			if err := s.emailVerificationService.StoreToken(user.ID, user.Email, token); err != nil {
-				s.logger.Warn("Failed to store email verification token (non-blocking)", zap.Error(err))
-			} else {
-				// Send the verification email (simulated for now)
-				s.logger.Info("Sending verification email",
-					zap.String("user_id", user.ID.String()),
-					zap.String("email", user.Email),
-				)
-			}
-		}
+		token, tokenErr := s.emailVerificationService.GenerateToken()
+		if tokenErr != nil {
+			s.logger.Error("Failed to generate email verification token",
+				zap.String("user_id", user.ID.String()),
+				zap.Error(tokenErr),
+			)
+			if isProductionEnv() {
+				return nil, nil, fmt.Errorf("failed to generate verification token: %w", tokenErr)
+			}
+		} else if storeErr := s.emailVerificationService.StoreToken(user.ID, user.Email, token); storeErr != nil {
+			s.logger.Error("Failed to store email verification token",
+				zap.String("user_id", user.ID.String()),
+				zap.Error(storeErr),
+			)
+			if isProductionEnv() {
+				return nil, nil, fmt.Errorf("failed to store verification token: %w", storeErr)
+			}
+		} else if s.emailService != nil {
+			if sendErr := s.emailService.SendVerificationEmail(user.Email, token); sendErr != nil {
+				s.logger.Error("Failed to send verification email",
+					zap.String("user_id", user.ID.String()),
+					zap.String("email", user.Email),
+					zap.Error(sendErr),
+				)
+				if isProductionEnv() {
+					return nil, nil, fmt.Errorf("failed to send verification email: %w", sendErr)
+				}
+				s.logger.Warn("Continuing registration in non-production mode despite SMTP failure — verify via token in logs or MailHog UI (http://localhost:8025)",
+					zap.String("token", token),
+					zap.String("user_id", user.ID.String()),
+				)
+			} else {
+				s.logger.Info("Verification email sent",
+					zap.String("user_id", user.ID.String()),
+					zap.String("email", user.Email),
+				)
+			}
+		} else {
+			s.logger.Warn("Email service not wired — verification email not sent",
+				zap.String("user_id", user.ID.String()),
+				zap.String("token", token),
+			)
+			if isProductionEnv() {
+				return nil, nil, fmt.Errorf("email service unavailable in production")
+			}
+		}
 	} else {
 		s.logger.Warn("Email verification service not available - skipping token generation")
+		if isProductionEnv() {
+			return nil, nil, fmt.Errorf("email verification service unavailable in production")
+		}
 	}

 	s.logger.Info("User registered successfully", zap.String("user_id", user.ID.String()))

@@ -1020,3 +1055,9 @@ func min(a, b int) int {
 	}
 	return b
 }
+
+// isProductionEnv reports whether APP_ENV is set to "production". Used to
+// decide whether SMTP / email-delivery failures should be hard errors.
+func isProductionEnv() bool {
+	return strings.EqualFold(os.Getenv("APP_ENV"), "production")
+}
@@ -313,7 +313,7 @@ func TestAuthService_Logout(t *testing.T) {
 }

 func TestAuthService_Login_Success(t *testing.T) {
-	service, _, mocks, cleanup := setupTestAuthService(t)
+	service, db, mocks, cleanup := setupTestAuthService(t)
 	defer cleanup()
 	ctx := context.Background()

@@ -340,10 +340,15 @@ func TestAuthService_Login_Success(t *testing.T) {
 	mocks.RefreshToken.On("Store", mock.AnythingOfType("uuid.UUID"), "refresh-token", mock.Anything).Return(nil).Once()
 	mocks.EmailVerification.On("GenerateToken").Return("verify-token", nil).Once()
 	mocks.EmailVerification.On("StoreToken", mock.AnythingOfType("uuid.UUID"), email, "verify-token").Return(nil).Once()
+	mocks.Email.On("SendVerificationEmail", email, "verify-token").Return(nil).Once()

 	user, _, err := service.Register(ctx, email, "loginuser", password)
 	require.NoError(t, err)

+	// Simulate the user clicking the verification link — Register now leaves
+	// is_verified=false (v1.0.4 hardening) and Login refuses unverified users.
+	require.NoError(t, db.Model(&models.User{}).Where("id = ?", user.ID).Update("is_verified", true).Error)
+
 	// Now Login
 	// Login also needs JWT generation expectations
 	mocks.JWT.On("GenerateAccessToken", mock.AnythingOfType("*models.User")).Return("new-access-token", nil).Once()
@@ -6,6 +6,8 @@ import (
 	"fmt"
 	"net/http"
 	"net/http/httptest"
+	"os"
+	"path/filepath"
 	"testing"
 	"time"

@@ -228,6 +230,172 @@ func TestTrackHandler_DownloadTrack_NotFound(t *testing.T) {
 	assert.Equal(t, http.StatusNotFound, w.Code)
 }

+// TestTrackHandler_StreamTrack_InvalidID tests StreamTrack with a non-UUID param.
+func TestTrackHandler_StreamTrack_InvalidID(t *testing.T) {
+	handler, _, router, cleanup := setupTestTrackHandler(t)
+	defer cleanup()
+
+	router.GET("/tracks/:id/stream", handler.StreamTrack)
+
+	req := httptest.NewRequest(http.MethodGet, "/tracks/not-a-uuid/stream", nil)
+	w := httptest.NewRecorder()
+	router.ServeHTTP(w, req)
+
+	assert.Equal(t, http.StatusBadRequest, w.Code)
+}
+
+// TestTrackHandler_StreamTrack_NotFound tests StreamTrack when the track does not exist.
+func TestTrackHandler_StreamTrack_NotFound(t *testing.T) {
+	handler, _, router, cleanup := setupTestTrackHandler(t)
+	defer cleanup()
+
+	router.GET("/tracks/:id/stream", handler.StreamTrack)
+
+	nonExistentID := uuid.New()
+	req := httptest.NewRequest(http.MethodGet, fmt.Sprintf("/tracks/%s/stream", nonExistentID.String()), nil)
+	w := httptest.NewRecorder()
+	router.ServeHTTP(w, req)
+
+	assert.Equal(t, http.StatusNotFound, w.Code)
+}
+
+// TestTrackHandler_StreamTrack_PrivateForbidden ensures anonymous users cannot stream
+// a private track even when the file exists on disk.
+func TestTrackHandler_StreamTrack_PrivateForbidden(t *testing.T) {
+	handler, db, router, cleanup := setupTestTrackHandler(t)
+	defer cleanup()
+
+	ownerID := uuid.New()
+	user := &models.User{
+		ID:       ownerID,
+		Username: "owner",
+		Email:    "owner@example.com",
+	}
+	require.NoError(t, db.Create(user).Error)
+
+	trackID := uuid.New()
+	track := createTestTrack(trackID, ownerID)
+	require.NoError(t, db.Create(track).Error)
+	// gorm:"default:true" on IsPublic means we can't persist `false` through the
+	// struct path (zero-value is indistinguishable from unset) — flip it with Update.
+	require.NoError(t, db.Model(&models.Track{}).Where("id = ?", trackID).Update("is_public", false).Error)
+
+	router.GET("/tracks/:id/stream", handler.StreamTrack)
+
+	req := httptest.NewRequest(http.MethodGet, fmt.Sprintf("/tracks/%s/stream", trackID.String()), nil)
+	w := httptest.NewRecorder()
+	router.ServeHTTP(w, req)
+
+	assert.Equal(t, http.StatusForbidden, w.Code)
+}
+
+// TestTrackHandler_StreamTrack_MissingFile verifies that a track row with no matching
+// file on disk surfaces as 404 instead of a generic 500.
+func TestTrackHandler_StreamTrack_MissingFile(t *testing.T) {
+	handler, db, router, cleanup := setupTestTrackHandler(t)
+	defer cleanup()
+
+	ownerID := uuid.New()
+	user := &models.User{
+		ID:       ownerID,
+		Username: "owner",
+		Email:    "owner@example.com",
+	}
+	require.NoError(t, db.Create(user).Error)
+
+	trackID := uuid.New()
+	track := createTestTrack(trackID, ownerID)
+	track.IsPublic = true
+	track.FilePath = filepath.Join(t.TempDir(), "does-not-exist.mp3")
+	require.NoError(t, db.Create(track).Error)
+
+	router.GET("/tracks/:id/stream", handler.StreamTrack)
+
+	req := httptest.NewRequest(http.MethodGet, fmt.Sprintf("/tracks/%s/stream", trackID.String()), nil)
+	w := httptest.NewRecorder()
+	router.ServeHTTP(w, req)
+
+	assert.Equal(t, http.StatusNotFound, w.Code)
+}
+
+// TestTrackHandler_StreamTrack_FullBody streams a real file end-to-end and asserts
+// that the range-aware streaming headers are present.
+func TestTrackHandler_StreamTrack_FullBody(t *testing.T) {
+	handler, db, router, cleanup := setupTestTrackHandler(t)
+	defer cleanup()
+
+	ownerID := uuid.New()
+	user := &models.User{
+		ID:       ownerID,
+		Username: "owner",
+		Email:    "owner@example.com",
+	}
+	require.NoError(t, db.Create(user).Error)
+
+	tmpDir := t.TempDir()
+	filePath := filepath.Join(tmpDir, "sample.mp3")
+	payload := bytes.Repeat([]byte{0xAB}, 4096)
+	require.NoError(t, os.WriteFile(filePath, payload, 0o600))
+
+	trackID := uuid.New()
+	track := createTestTrack(trackID, ownerID)
+	track.IsPublic = true
+	track.FilePath = filePath
+	require.NoError(t, db.Create(track).Error)
+
+	router.GET("/tracks/:id/stream", handler.StreamTrack)
+
+	req := httptest.NewRequest(http.MethodGet, fmt.Sprintf("/tracks/%s/stream", trackID.String()), nil)
+	w := httptest.NewRecorder()
+	router.ServeHTTP(w, req)
+
+	assert.Equal(t, http.StatusOK, w.Code)
+	assert.Equal(t, "audio/mpeg", w.Header().Get("Content-Type"))
+	assert.Equal(t, "bytes", w.Header().Get("Accept-Ranges"))
+	assert.Equal(t, len(payload), w.Body.Len())
+}
+
+// TestTrackHandler_StreamTrack_RangeRequest verifies that http.ServeContent honors
+// the Range header — this is what enables seeking in an <audio> element.
+func TestTrackHandler_StreamTrack_RangeRequest(t *testing.T) {
+	handler, db, router, cleanup := setupTestTrackHandler(t)
+	defer cleanup()
+
+	ownerID := uuid.New()
+	user := &models.User{
+		ID:       ownerID,
+		Username: "owner",
+		Email:    "owner@example.com",
+	}
+	require.NoError(t, db.Create(user).Error)
+
+	tmpDir := t.TempDir()
+	filePath := filepath.Join(tmpDir, "sample.mp3")
+	payload := make([]byte, 256)
+	for i := range payload {
+		payload[i] = byte(i)
+	}
+	require.NoError(t, os.WriteFile(filePath, payload, 0o600))
+
+	trackID := uuid.New()
+	track := createTestTrack(trackID, ownerID)
+	track.IsPublic = true
+	track.FilePath = filePath
+	require.NoError(t, db.Create(track).Error)
+
+	router.GET("/tracks/:id/stream", handler.StreamTrack)
+
+	req := httptest.NewRequest(http.MethodGet, fmt.Sprintf("/tracks/%s/stream", trackID.String()), nil)
+	req.Header.Set("Range", "bytes=10-19")
+	w := httptest.NewRecorder()
+	router.ServeHTTP(w, req)
+
+	assert.Equal(t, http.StatusPartialContent, w.Code)
+	assert.Equal(t, "bytes 10-19/256", w.Header().Get("Content-Range"))
+	assert.Equal(t, 10, w.Body.Len())
+	assert.Equal(t, payload[10:20], w.Body.Bytes())
+}
+
 // TestTrackHandler_CreateShare tests CreateShare handler
 func TestTrackHandler_CreateShare_Success(t *testing.T) {
 	handler, db, router, cleanup := setupTestTrackHandler(t)
@ -172,6 +172,100 @@ func (h *TrackHandler) DownloadTrack(c *gin.Context) {
|
|||
c.File(track.FilePath)
|
||||
}
|
||||
|
||||
// StreamTrack serves raw track audio via HTTP range requests for <audio> elements.
|
||||
// Unlike /download (which sets Content-Disposition: inline) and /hls/* (which is gated
|
||||
// by HLSEnabled), /stream is always available and is the default playback path when
|
||||
// HLS transcoding is off. The file is served via http.ServeContent which handles
|
||||
// Range, If-Modified-Since and If-None-Match automatically.
|
||||
func (h *TrackHandler) StreamTrack(c *gin.Context) {
|
||||
var userID uuid.UUID
|
||||
if userIDInterface, exists := c.Get("user_id"); exists {
|
||||
if uid, ok := userIDInterface.(uuid.UUID); ok {
|
||||
userID = uid
|
||||
}
|
||||
}
|
||||
|
||||
trackIDStr := c.Param("id")
|
||||
if trackIDStr == "" {
|
||||
h.respondWithError(c, http.StatusBadRequest, "track id is required")
|
||||
return
|
||||
}
|
||||
trackID, err := uuid.Parse(trackIDStr)
|
||||
if err != nil {
|
||||
h.respondWithError(c, http.StatusBadRequest, "invalid track id")
|
||||
return
|
||||
}
|
||||
|
||||
track, err := h.trackService.GetTrackByID(c.Request.Context(), trackID)
|
||||
if err != nil {
|
||||
if errors.Is(err, ErrTrackNotFound) || errors.Is(err, gorm.ErrRecordNotFound) {
|
||||
h.respondWithError(c, http.StatusNotFound, "track not found")
|
||||
return
|
||||
}
|
||||
h.respondWithError(c, http.StatusInternalServerError, "failed to get track")
|
||||
return
|
||||
}
|
||||
|
||||
if shareToken := c.Query("share_token"); shareToken != "" {
|
||||
if h.shareService == nil {
|
||||
h.respondWithError(c, http.StatusInternalServerError, "share service not available")
|
||||
return
|
||||
}
|
||||
share, shareErr := h.shareService.ValidateShareToken(c.Request.Context(), shareToken)
|
||||
if shareErr != nil {
|
||||
if errors.Is(shareErr, services.ErrShareNotFound) {
|
||||
h.respondWithError(c, http.StatusForbidden, "invalid share token")
|
||||
return
|
||||
}
|
||||
if errors.Is(shareErr, services.ErrShareExpired) {
|
||||
h.respondWithError(c, http.StatusForbidden, "share link expired")
|
||||
return
|
||||
}
|
||||
h.respondWithError(c, http.StatusInternalServerError, "failed to validate share token")
|
||||
return
|
||||
}
|
||||
if share.TrackID != trackID {
|
||||
h.respondWithError(c, http.StatusForbidden, "invalid share token")
|
||||
return
|
||||
}
|
||||
} else if !track.IsPublic && track.UserID != userID {
|
||||
h.respondWithError(c, http.StatusForbidden, "forbidden")
|
||||
return
|
||||
}
|
||||
|
||||
file, err := os.Open(track.FilePath)
|
||||
if err != nil {
|
||||
if os.IsNotExist(err) {
|
||||
h.respondWithError(c, http.StatusNotFound, "track file not found")
|
||||
return
|
||||
}
|
||||
h.trackService.logger.Error("failed to open track file for streaming",
|
||||
zap.Error(err),
|
||||
zap.String("track_id", trackID.String()),
|
||||
zap.String("file_path", track.FilePath),
|
||||
)
|
||||
h.respondWithError(c, http.StatusInternalServerError, "failed to open track file")
|
||||
return
|
||||
}
|
||||
defer file.Close()
|
||||
|
||||
stat, err := file.Stat()
|
||||
if err != nil {
|
||||
h.trackService.logger.Error("failed to stat track file",
|
||||
zap.Error(err),
|
||||
zap.String("track_id", trackID.String()),
|
||||
)
|
||||
h.respondWithError(c, http.StatusInternalServerError, "failed to stat track file")
|
||||
return
|
||||
}
|
||||
|
||||
c.Header("Content-Type", getContentType(track.Format))
|
||||
c.Header("Accept-Ranges", "bytes")
|
||||
c.Header("Cache-Control", "private, max-age=3600")
|
||||
|
||||
http.ServeContent(c.Writer, c.Request, track.Title, stat.ModTime(), file)
|
||||
}

// getContentType returns the appropriate Content-Type for an audio format
func getContentType(format string) string {
	switch strings.ToUpper(format) {
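The compare view collapses the rest of the switch. For orientation, a plausible shape of the mapping, assuming the usual audio formats — the actual cases in the file may differ:

```go
// Plausible completion (assumption — the real body is collapsed in this view):
func getContentType(format string) string {
	switch strings.ToUpper(format) {
	case "MP3":
		return "audio/mpeg"
	case "WAV":
		return "audio/wav"
	case "FLAC":
		return "audio/flac"
	case "OGG":
		return "audio/ogg"
	case "AAC", "M4A":
		return "audio/mp4"
	default:
		return "application/octet-stream"
	}
}
```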
@@ -211,7 +211,7 @@ func TestLogin_EmailNotVerified(t *testing.T) {
 	user, _, err := authService.Register(ctx, "test@example.com", "testuser", "SecurePassword123!")
 	require.NoError(t, err)
 	require.NotNil(t, user)
-	// User is not verified by default
+	// User is not verified by default (v1.0.5: Register leaves is_verified=false).

 	reqBody := dto.LoginRequest{
 		Email: "test@example.com",

@@ -226,8 +226,8 @@ func TestLogin_EmailNotVerified(t *testing.T) {

 	router.ServeHTTP(w, req)

-	// Current behavior: unverified users can login (StatusOK). Product may require StatusForbidden in future.
-	assert.Equal(t, http.StatusOK, w.Code)
+	// Login refuses unverified users; the user must POST /auth/verify-email first.
+	assert.Equal(t, http.StatusForbidden, w.Code)
 }

 func TestLogin_Requires2FA(t *testing.T) {
@@ -3,6 +3,8 @@ package handlers
import (
	"context"
	"net/http"
	"os"
	"strings"

	"veza-backend-api/internal/core/auth" // Added import for authcore
	"veza-backend-api/internal/services"
@@ -12,6 +14,12 @@ import (
	"go.uber.org/zap"
)

// isProductionEnv reports whether APP_ENV is set to "production". Used to
// decide whether SMTP delivery failures should surface as HTTP 500.
func isProductionEnv() bool {
	return strings.EqualFold(os.Getenv("APP_ENV"), "production")
}

// RequestPasswordResetRequest represents a request to reset password
// T0193: Request structure for password reset endpoint
// MOD-P1-001: Added validate tags for systematic validation
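Note that `strings.EqualFold` makes the check case-insensitive, so `APP_ENV=Production` or `APP_ENV=PRODUCTION` also count as production — a quick illustration (fragment, not a full program):

```go
os.Setenv("APP_ENV", "Production")
fmt.Println(isProductionEnv()) // true — EqualFold ignores case

os.Setenv("APP_ENV", "staging")
fmt.Println(isProductionEnv()) // false — anything else is non-production
```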
@@ -129,14 +137,19 @@ func RequestPasswordResetWithInterfaces(
 		return
 	}

-	// Send email
+	// Send email. In production SMTP must work — a silent log-only failure
+	// would leave the user stuck with no way to reset. In development we
+	// keep the generic success response so local sign-ups keep flowing.
 	if err := emailService.SendPasswordResetEmail(user.ID, user.Email, token); err != nil {
-		// Log but don't fail - user should still get success message
 		logger.Error("Failed to send password reset email",
 			zap.String("user_id", user.ID.String()),
 			zap.String("email", user.Email),
 			zap.Error(err),
 		)
+		if isProductionEnv() {
+			c.JSON(http.StatusInternalServerError, gin.H{"error": "failed to send password reset email"})
+			return
+		}
 	}

 	// BE-SEC-013: Log password reset request
veza-backend-api/internal/jobs/cleanup_orphan_tracks.go (new file, 111 lines)

@@ -0,0 +1,111 @@
package jobs

import (
	"context"
	"os"
	"time"

	"veza-backend-api/internal/database"
	"veza-backend-api/internal/models"

	"go.uber.org/zap"
)

// OrphanTrackAgeThreshold is how long a track can stay in the `processing`
// state before the cleanup job considers it abandoned.
const OrphanTrackAgeThreshold = time.Hour

// CleanupOrphanTracks finds tracks stuck in `processing` whose source file
// has disappeared from disk (upload service crashed mid-write, disk cleanup,
// container restart during a long upload, etc.) and flips them to `failed`
// with an explanatory status message. It never deletes the row.
//
// Safe to run repeatedly: already-failed rows are ignored, and tracks still
// present on disk are left alone.
func CleanupOrphanTracks(ctx context.Context, db *database.Database, logger *zap.Logger) error {
	if db == nil || db.GormDB == nil {
		return nil
	}
	cutoff := time.Now().Add(-OrphanTrackAgeThreshold)

	var stuck []models.Track
	if err := db.GormDB.WithContext(ctx).
		Where("status = ? AND created_at < ?", models.TrackStatusProcessing, cutoff).
		Find(&stuck).Error; err != nil {
		logger.Error("Failed to query stuck processing tracks", zap.Error(err))
		return err
	}

	if len(stuck) == 0 {
		logger.Debug("Orphan tracks cleanup: no tracks older than threshold in processing state",
			zap.Duration("age_threshold", OrphanTrackAgeThreshold),
		)
		return nil
	}

	failed := 0
	for i := range stuck {
		track := &stuck[i]
		if _, err := os.Stat(track.FilePath); err == nil {
			// File still there — uploader is slow, not dead. Leave the row.
			continue
		} else if !os.IsNotExist(err) {
			logger.Warn("Could not stat track file while checking orphan",
				zap.String("track_id", track.ID.String()),
				zap.String("file_path", track.FilePath),
				zap.Error(err),
			)
			continue
		}

		updates := map[string]interface{}{
			"status":         models.TrackStatusFailed,
			"status_message": "orphan cleanup: file missing on disk after >1h in processing",
		}
		if err := db.GormDB.WithContext(ctx).
			Model(&models.Track{}).
			Where("id = ? AND status = ?", track.ID, models.TrackStatusProcessing).
			Updates(updates).Error; err != nil {
			logger.Error("Failed to mark orphan track as failed",
				zap.String("track_id", track.ID.String()),
				zap.Error(err),
			)
			continue
		}
		failed++
		logger.Warn("Orphan track flipped to failed",
			zap.String("track_id", track.ID.String()),
			zap.String("file_path", track.FilePath),
			zap.Duration("age", time.Since(track.CreatedAt)),
		)
	}

	logger.Info("Orphan tracks cleanup complete",
		zap.Int("candidates", len(stuck)),
		zap.Int("marked_failed", failed),
	)
	return nil
}

// ScheduleOrphanTracksCleanup kicks off a background goroutine that runs
// CleanupOrphanTracks once immediately and then every hour thereafter.
// Mirrors the pattern used by ScheduleSessionCleanupJob.
func ScheduleOrphanTracksCleanup(db *database.Database, logger *zap.Logger) {
	ticker := time.NewTicker(time.Hour)
	go func() {
		ctx := context.Background()

		if err := CleanupOrphanTracks(ctx, db, logger); err != nil {
			logger.Error("Initial orphan tracks cleanup failed", zap.Error(err))
		}

		for range ticker.C {
			if err := CleanupOrphanTracks(ctx, db, logger); err != nil {
				logger.Error("Scheduled orphan tracks cleanup failed", zap.Error(err))
			}
		}
	}()
	logger.Info("Orphan tracks cleanup scheduled to run hourly",
		zap.Duration("age_threshold", OrphanTrackAgeThreshold),
	)
}
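Wiring the job is a single call during startup. A minimal sketch, assuming `main` already holds the shared `*database.Database` and `*zap.Logger` (the variable names and file location are illustrative):

```go
// In cmd/server/main.go (illustrative location), after DB and logger exist:
jobs.ScheduleOrphanTracksCleanup(db, logger)
// Runs once immediately, then hourly in a background goroutine. A nil or
// uninitialized DB makes every run a no-op, so test binaries stay quiet.
```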
veza-backend-api/internal/jobs/cleanup_orphan_tracks_test.go (new file, 122 lines)

@@ -0,0 +1,122 @@
package jobs

import (
	"context"
	"os"
	"path/filepath"
	"testing"
	"time"

	"veza-backend-api/internal/database"
	"veza-backend-api/internal/models"

	"github.com/google/uuid"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"go.uber.org/zap/zaptest"
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

func setupOrphanTestDB(t *testing.T) (*database.Database, *gorm.DB) {
	gormDB, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{})
	require.NoError(t, err)

	require.NoError(t, gormDB.AutoMigrate(&models.Track{}, &models.User{}))

	db := &database.Database{GormDB: gormDB}
	return db, gormDB
}

func insertTestTrack(t *testing.T, gormDB *gorm.DB, filePath string, status models.TrackStatus, createdAt time.Time) uuid.UUID {
	ownerID := uuid.New()
	require.NoError(t, gormDB.Create(&models.User{
		ID:       ownerID,
		Username: "owner-" + ownerID.String()[:8],
		Email:    ownerID.String() + "@test.local",
	}).Error)

	trackID := uuid.New()
	track := &models.Track{
		ID:        trackID,
		UserID:    ownerID,
		Title:     "Test",
		Artist:    "Test",
		Duration:  1,
		FilePath:  filePath,
		FileSize:  1,
		Status:    status,
		CreatedAt: createdAt,
	}
	require.NoError(t, gormDB.Create(track).Error)
	// created_at is managed by autoCreateTime — force our sentinel time.
	require.NoError(t, gormDB.Model(&models.Track{}).Where("id = ?", trackID).Update("created_at", createdAt).Error)
	return trackID
}

func TestCleanupOrphanTracks_FlipsStuckMissingFile(t *testing.T) {
	db, gormDB := setupOrphanTestDB(t)
	logger := zaptest.NewLogger(t)

	missingPath := filepath.Join(t.TempDir(), "vanished.mp3")
	id := insertTestTrack(t, gormDB, missingPath, models.TrackStatusProcessing, time.Now().Add(-2*time.Hour))

	require.NoError(t, CleanupOrphanTracks(context.Background(), db, logger))

	var after models.Track
	require.NoError(t, gormDB.First(&after, "id = ?", id).Error)
	assert.Equal(t, models.TrackStatusFailed, after.Status)
	assert.Contains(t, after.StatusMessage, "orphan cleanup")
}

func TestCleanupOrphanTracks_LeavesFilePresent(t *testing.T) {
	db, gormDB := setupOrphanTestDB(t)
	logger := zaptest.NewLogger(t)

	goodPath := filepath.Join(t.TempDir(), "still-here.mp3")
	require.NoError(t, os.WriteFile(goodPath, []byte("audio"), 0o600))
	id := insertTestTrack(t, gormDB, goodPath, models.TrackStatusProcessing, time.Now().Add(-2*time.Hour))

	require.NoError(t, CleanupOrphanTracks(context.Background(), db, logger))

	var after models.Track
	require.NoError(t, gormDB.First(&after, "id = ?", id).Error)
	assert.Equal(t, models.TrackStatusProcessing, after.Status, "slow uploads should not be marked failed")
}

func TestCleanupOrphanTracks_LeavesRecent(t *testing.T) {
	db, gormDB := setupOrphanTestDB(t)
	logger := zaptest.NewLogger(t)

	missingPath := filepath.Join(t.TempDir(), "recent.mp3")
	id := insertTestTrack(t, gormDB, missingPath, models.TrackStatusProcessing, time.Now().Add(-10*time.Minute))

	require.NoError(t, CleanupOrphanTracks(context.Background(), db, logger))

	var after models.Track
	require.NoError(t, gormDB.First(&after, "id = ?", id).Error)
	assert.Equal(t, models.TrackStatusProcessing, after.Status, "tracks younger than threshold must be spared")
}

func TestCleanupOrphanTracks_IgnoresAlreadyFailed(t *testing.T) {
	db, gormDB := setupOrphanTestDB(t)
	logger := zaptest.NewLogger(t)

	missingPath := filepath.Join(t.TempDir(), "old.mp3")
	id := insertTestTrack(t, gormDB, missingPath, models.TrackStatusFailed, time.Now().Add(-3*time.Hour))
	// Seed a message we'd notice if the job overwrote it.
	require.NoError(t, gormDB.Model(&models.Track{}).Where("id = ?", id).Update("status_message", "previous failure").Error)

	require.NoError(t, CleanupOrphanTracks(context.Background(), db, logger))

	var after models.Track
	require.NoError(t, gormDB.First(&after, "id = ?", id).Error)
	assert.Equal(t, models.TrackStatusFailed, after.Status)
	assert.Equal(t, "previous failure", after.StatusMessage, "job must not rewrite unrelated failed rows")
}

func TestCleanupOrphanTracks_NilDatabaseIsNoop(t *testing.T) {
	logger := zaptest.NewLogger(t)
	assert.NoError(t, CleanupOrphanTracks(context.Background(), nil, logger))
	assert.NoError(t, CleanupOrphanTracks(context.Background(), &database.Database{}, logger))
}
@@ -1,39 +1,134 @@
 package middleware

 import (
+	"context"
 	"net/http"
 	"os"
 	"strings"
 	"sync"
+	"time"

 	"github.com/gin-gonic/gin"
+	"go.uber.org/zap"
+	"gorm.io/gorm"
 )

+// maintenanceState carries the latest cached view of the platform-wide
+// maintenance flag. It is refreshed lazily from `platform_settings` when a
+// request comes in after the TTL has expired, so a flag flipped on one pod
+// propagates to every other pod within a bounded window (10s).
+type maintenanceState struct {
+	mu        sync.RWMutex
+	enabled   bool
+	lastCheck time.Time
+	db        *gorm.DB
+	logger    *zap.Logger
+	ttl       time.Duration
+}
+
+const defaultMaintenanceCacheTTL = 10 * time.Second
+
 var (
-	maintenanceMode     bool
-	maintenanceModeOnce sync.Once
-	maintenanceMu       sync.RWMutex
+	state             = &maintenanceState{ttl: defaultMaintenanceCacheTTL}
+	maintenanceInitMu sync.Mutex
 )

 func init() {
-	maintenanceModeOnce.Do(func() {
-		v := os.Getenv("MAINTENANCE_MODE")
-		maintenanceMode = v == "true" || v == "1"
-	})
+	v := os.Getenv("MAINTENANCE_MODE")
+	state.mu.Lock()
+	state.enabled = v == "true" || v == "1"
+	state.mu.Unlock()
 }

-// MaintenanceModeEnabled returns whether maintenance mode is active
+// InitMaintenanceMode wires the DB pool so subsequent MaintenanceModeEnabled()
+// calls refresh from `platform_settings.maintenance_mode` with a TTL cache.
+// Safe to call more than once (last write wins). If db is nil the middleware
+// falls back to the in-memory state seeded from MAINTENANCE_MODE.
+func InitMaintenanceMode(db *gorm.DB, logger *zap.Logger) {
+	maintenanceInitMu.Lock()
+	defer maintenanceInitMu.Unlock()
+
+	if logger == nil {
+		logger = zap.NewNop()
+	}
+	state.mu.Lock()
+	state.db = db
+	state.logger = logger
+	state.lastCheck = time.Time{} // force refresh on first call
+	state.mu.Unlock()
+
+	// Prime the cache so the very first request doesn't see a stale value.
+	refreshFromDB(context.Background())
+}
+
+// refreshFromDB reads the current value from the DB and updates the cache.
+// It never propagates errors to callers — a broken DB should not silently
+// enable maintenance mode, so the previous cached value wins.
+func refreshFromDB(ctx context.Context) {
+	state.mu.RLock()
+	db := state.db
+	logger := state.logger
+	state.mu.RUnlock()
+	if db == nil {
+		return
+	}
+
+	var row struct {
+		ValueBool *bool `gorm:"column:value_bool"`
+	}
+	err := db.WithContext(ctx).
+		Table("platform_settings").
+		Select("value_bool").
+		Where("key = ?", "maintenance_mode").
+		Take(&row).Error
+
+	state.mu.Lock()
+	state.lastCheck = time.Now()
+	state.mu.Unlock()
+
+	if err != nil {
+		if err != gorm.ErrRecordNotFound && logger != nil {
+			logger.Warn("Failed to refresh maintenance flag from DB — keeping cached value",
+				zap.Error(err),
+			)
+		}
+		return
+	}
+
+	enabled := row.ValueBool != nil && *row.ValueBool
+	state.mu.Lock()
+	state.enabled = enabled
+	state.mu.Unlock()
+}
+
+// MaintenanceModeEnabled returns the cached maintenance flag, refreshing from
+// the DB if the TTL has expired and a DB pool has been wired.
 func MaintenanceModeEnabled() bool {
-	maintenanceMu.RLock()
-	defer maintenanceMu.RUnlock()
-	return maintenanceMode
+	state.mu.RLock()
+	enabled := state.enabled
+	lastCheck := state.lastCheck
+	hasDB := state.db != nil
+	ttl := state.ttl
+	state.mu.RUnlock()
+
+	if hasDB && time.Since(lastCheck) > ttl {
+		refreshFromDB(context.Background())
+		state.mu.RLock()
+		enabled = state.enabled
+		state.mu.RUnlock()
+	}
+	return enabled
 }

-// SetMaintenanceMode sets maintenance mode (for admin toggle)
+// SetMaintenanceMode sets the in-memory flag without touching the DB. It is
+// kept for tests and for cases where a caller already owns the DB write — it
+// does not persist the value across pods. Use PlatformSettings to change
+// state across a deployment.
 func SetMaintenanceMode(enabled bool) {
-	maintenanceMu.Lock()
-	defer maintenanceMu.Unlock()
-	maintenanceMode = enabled
+	state.mu.Lock()
+	state.enabled = enabled
+	state.lastCheck = time.Now().Add(state.ttl) // suppress the next DB refresh
+	state.mu.Unlock()
 }

 // MaintenanceGin returns a Gin middleware for maintenance mode.
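Startup wiring for the DB-backed flag is one call plus the existing middleware registration. A sketch, assuming `gormDB`, `logger`, and `router` already exist in `main`; `MaintenanceGin()` is shown without arguments, though its real signature may take options:

```go
// Illustrative startup order: wire the DB first so the cache is primed,
// then register the middleware that consults MaintenanceModeEnabled().
middleware.InitMaintenanceMode(gormDB, logger)
router.Use(middleware.MaintenanceGin())
// From here, flipping platform_settings.maintenance_mode on any pod becomes
// visible to every pod within the 10s cache TTL.
```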
@@ -4,9 +4,14 @@ import (
	"net/http"
	"net/http/httptest"
	"testing"
	"time"

	"github.com/gin-gonic/gin"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"go.uber.org/zap/zaptest"
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

func TestMaintenanceGin_Disabled(t *testing.T) {
@@ -81,3 +86,53 @@ func TestMaintenanceGin_AdminExempt(t *testing.T) {

	assert.Equal(t, http.StatusOK, w.Code)
}

// TestMaintenanceGin_DBBacked verifies that changes written to
// platform_settings propagate to MaintenanceModeEnabled() once the cache TTL
// lapses. This guards the multi-pod correctness claim of v1.0.5.
func TestMaintenanceGin_DBBacked(t *testing.T) {
	db, err := gorm.Open(sqlite.Open(":memory:"), &gorm.Config{})
	require.NoError(t, err)

	require.NoError(t, db.Exec(`
		CREATE TABLE platform_settings (
			id INTEGER PRIMARY KEY AUTOINCREMENT,
			key TEXT NOT NULL UNIQUE,
			value_bool BOOLEAN,
			value_text TEXT,
			description TEXT,
			updated_at DATETIME,
			updated_by TEXT
		)`).Error)
	require.NoError(t, db.Exec(
		`INSERT INTO platform_settings (key, value_bool, description) VALUES ('maintenance_mode', 0, 'test')`,
	).Error)

	// Start from a clean slate so no prior test leaked state into the package
	// globals.
	SetMaintenanceMode(false)
	defer SetMaintenanceMode(false)

	InitMaintenanceMode(db, zaptest.NewLogger(t))
	// Shrink the TTL so we don't have to sleep 10s.
	state.mu.Lock()
	state.ttl = 50 * time.Millisecond
	state.mu.Unlock()
	defer func() {
		state.mu.Lock()
		state.ttl = defaultMaintenanceCacheTTL
		state.db = nil
		state.mu.Unlock()
	}()

	assert.False(t, MaintenanceModeEnabled(), "seeded value=0 should read as off")

	// Flip the DB row; before the TTL lapses the cached value still says off.
	require.NoError(t, db.Exec(
		`UPDATE platform_settings SET value_bool = 1 WHERE key = 'maintenance_mode'`,
	).Error)
	assert.False(t, MaintenanceModeEnabled(), "cache should still report off before TTL")

	time.Sleep(70 * time.Millisecond)
	assert.True(t, MaintenanceModeEnabled(), "after TTL the refresh should pick up the new value")
}
@@ -84,6 +84,25 @@ func ResponseCache(cfg ResponseCacheConfig) gin.HandlerFunc {
		return
	}

	// Skip caching for binary, range-aware media endpoints. Caching these
	// strips Accept-Ranges and returns the full body for every request —
	// the <audio> element can't seek, and the cache also corrupts the
	// byte stream when JSON-serializing non-UTF-8 bytes.
	path := c.Request.URL.Path
	if strings.Contains(path, "/stream") ||
		strings.Contains(path, "/download") ||
		strings.Contains(path, "/hls/") {
		c.Next()
		return
	}

	// Any explicit Range request must go straight to the handler so
	// http.ServeContent can honor it.
	if c.GetHeader("Range") != "" {
		c.Next()
		return
	}

	// Generate cache key from URL + query params
	cacheKey := generateCacheKey(cfg.KeyPrefix, c.Request.URL.RequestURI())
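For context, the middleware keeps its usual registration; only the early-return paths changed. An illustrative setup — `ResponseCacheConfig` and `KeyPrefix` appear in this diff, but the package path and any other fields are assumptions:

```go
api := router.Group("/api/v1")
api.Use(middleware.ResponseCache(middleware.ResponseCacheConfig{
	KeyPrefix: "api-cache", // KeyPrefix is real; other fields, if any, are assumed
}))
// /stream, /download and /hls/* now pass through uncached, and any request
// carrying a Range header skips the cache too, so http.ServeContent sees it.
```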
@@ -20,6 +20,12 @@ func NewChatPubSubService(redisClient *redis.Client, logger *zap.Logger) *ChatPubSubService {
	if logger == nil {
		logger = zap.NewNop()
	}
	if redisClient == nil {
		// In multi-pod deployments the in-memory fallback silently breaks:
		// messages published on pod A are never seen by subscribers on pod B.
		// Emit a loud startup error so the misconfiguration is noticed.
		logger.Error("Redis unavailable, falling back to in-memory PubSub — cross-instance messages will be lost. Set REDIS_URL and restart for multi-pod correctness")
	}
	return &ChatPubSubService{
		redisClient: redisClient,
		logger:      logger,
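To avoid the fallback entirely, the constructor wants a live Redis client. A minimal sketch using go-redis, assuming `REDIS_URL` holds a `redis://` URL as the startup error message suggests:

```go
// Parse REDIS_URL and hand a real client to the pubsub service so messages
// fan out across pods instead of staying pod-local.
opts, err := redis.ParseURL(os.Getenv("REDIS_URL"))
if err != nil {
	logger.Fatal("invalid REDIS_URL", zap.Error(err))
}
pubsub := services.NewChatPubSubService(redis.NewClient(opts), logger)
```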
@@ -36,7 +42,13 @@ func (s *ChatPubSubService) Publish(ctx context.Context, roomID uuid.UUID, message []byte) error {

 	if s.redisClient != nil {
 		if err := s.redisClient.Publish(ctx, channel, message).Err(); err != nil {
-			s.logger.Warn("Redis publish failed, using in-memory fallback", zap.Error(err))
+			// ERROR, not Warn: the in-memory fallback only reaches subscribers
+			// on this pod — a multi-pod chat becomes partitioned until Redis
+			// recovers. Operators should page on this log line.
+			s.logger.Error("Redis publish failed, in-memory fallback will not reach other pods",
+				zap.String("channel", channel),
+				zap.Error(err),
+			)
 			s.publishInMemory(channel, message)
 		}
 		return nil
veza-backend-api/internal/services/chat_pubsub_test.go (new file, 45 lines)

@@ -0,0 +1,45 @@
package services

import (
	"context"
	"testing"

	"github.com/google/uuid"
	"github.com/stretchr/testify/assert"
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
	"go.uber.org/zap/zaptest/observer"
)

func TestChatPubSubService_NilRedisLogsError(t *testing.T) {
	core, observed := observer.New(zapcore.ErrorLevel)
	logger := zap.New(core)

	_ = NewChatPubSubService(nil, logger)

	entries := observed.All()
	assert.Len(t, entries, 1, "constructor should emit exactly one ERROR log when Redis is nil")
	assert.Equal(t, zapcore.ErrorLevel, entries[0].Level)
	assert.Contains(t, entries[0].Message, "cross-instance messages will be lost")
}

func TestChatPubSubService_InMemoryFanout(t *testing.T) {
	svc := NewChatPubSubService(nil, zap.NewNop())

	ctx := context.Background()
	roomID := uuid.New()

	ch, cancel, err := svc.Subscribe(ctx, roomID)
	assert.NoError(t, err)
	defer cancel()

	err = svc.Publish(ctx, roomID, []byte("hello"))
	assert.NoError(t, err)

	select {
	case msg := <-ch:
		assert.Equal(t, "hello", string(msg))
	default:
		t.Fatal("expected message on in-memory channel")
	}
}
veza-backend-api/migrations/976_platform_settings.sql (new file, 21 lines)

@@ -0,0 +1,21 @@
-- Migration 976: Platform-wide runtime settings (v1.0.5)
-- Replaces the in-memory maintenance toggle with a DB-backed key/value table so
-- all pods see the same state. Values are typed to avoid string-parsing in
-- the hot path.

CREATE TABLE IF NOT EXISTS public.platform_settings (
    id          SERIAL PRIMARY KEY,
    key         TEXT NOT NULL UNIQUE,
    value_bool  BOOLEAN,
    value_text  TEXT,
    description TEXT NOT NULL DEFAULT '',
    updated_at  TIMESTAMPTZ NOT NULL DEFAULT NOW(),
    updated_by  UUID REFERENCES public.users(id) ON DELETE SET NULL
);

CREATE INDEX IF NOT EXISTS idx_platform_settings_key ON public.platform_settings(key);

-- Seed the maintenance_mode row; idempotent so rerunning migrations is safe.
INSERT INTO public.platform_settings (key, value_bool, description)
VALUES ('maintenance_mode', FALSE, 'When TRUE, all API requests outside the exempt list return 503.')
ON CONFLICT (key) DO NOTHING;
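Operationally, the toggle is now a row update rather than a redeploy. A sketch of the admin-side write using GORM's `Exec` (the surrounding handler and the `gormDB` variable are assumed):

```go
// Persist the flag so every pod picks it up within the cache TTL —
// unlike SetMaintenanceMode, which only touches this pod's memory.
err := gormDB.Exec(
	`UPDATE platform_settings
	    SET value_bool = ?, updated_at = NOW()
	  WHERE key = 'maintenance_mode'`,
	true,
).Error
```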