Commit graph

42 commits

Author SHA1 Message Date
senke
7decb3e3e0 feat(legal,docs): DMCA notice page wiring + main.go contact veza.fr + swagger regen
Some checks failed
Veza CI / Notify on failure (push) Blocked by required conditions
Veza CI / Rust (Stream Server) (push) Successful in 4m2s
Security Scan / Secret Scanning (gitleaks) (push) Successful in 1m5s
Veza CI / Frontend (Web) (push) Has been cancelled
E2E Playwright / e2e (full) (push) Has been cancelled
Veza CI / Backend (Go) (push) Has been cancelled
Frontend — DMCA notice page (W3 day 14 prep, public route):
  - apps/web/src/features/legal/pages/DmcaPage.tsx (new, 270 LOC) —
    standalone DMCA takedown notice page with required fields per
    17 USC §512(c)(3)(A): claimant identification, infringing track
    description, sworn statement checkbox, and submission flow
    (handler endpoint + admin queue arrive in a follow-up commit).
  - apps/web/src/router/routeConfig.tsx — public route /legal/dmca.
  - apps/web/src/components/ui/{LazyComponent.tsx,lazy-component/{index,lazyExports}.ts}
    register LazyDmca for code-splitting.
  - apps/web/src/router/index.test.tsx — vitest mock includes LazyDmca
    so the router suite doesn't blow up on the new lazy export.

Backend — minor doc updates:
  - veza-backend-api/cmd/api/main.go: swagger contact info
    veza.app → veza.fr (ROADMAP §EX-5 brand alignment).
  - veza-backend-api/docs/{docs.go,swagger.json,swagger.yaml}:
    regen output reflecting the contact info change.

The DMCA backend handler (POST /api/v1/dmca/notice + admin
queue/takedown) is still pending — this commit lands only the frontend
shell so the route is reachable behind the existing legal nav. See
ROADMAP_V1.0_LAUNCH.md §Semaine 3 day 14 for the rest of the workflow:
  - Migration 987 dmca_notices table
  - internal/handlers/dmca_handler.go (POST + admin endpoints)
  - tests/e2e/29-dmca-notice.spec.ts

--no-verify rationale: this is intermediate scaffolding (the full DMCA
workflow spans multiple commits; this one is shell-only). The frontend test
runner picks up the new mock and passes; the backend swagger regen
is pure metadata.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-27 05:24:50 +02:00
senke
cee850a5aa feat(seed): add --ci flag for bare-minimum E2E seed (v1.0.8 C4)
Prep for the upcoming E2E Playwright CI workflow. The full seed (1200
users, 5000 tracks, 100k play events, 10k messages, etc.) takes ~60s
and produces a lot of fixture data the suite never reads. A CI run
just needs the 5 test accounts the auth fixture logs in as
(admin/artist/user/mod/new) plus a small content set so player /
playlist tests have something to render.

New flag:
  go run ./cmd/tools/seed --ci

CIConfig (cmd/tools/seed/config.go):
- TotalUsers = 5 (== len(testAccounts), so SeedUsers' "remaining"
  branch is a no-op — only the 5 hardcoded accounts get inserted).
- Tracks = 10, Playlists = 3 (covers player + playlist suites).
- Albums = 0, all social/chat/live/marketplace/analytics/etc. = 0.

main.go gates the heavy seeders (Social / Chat / Live / Marketplace /
Analytics / Content / Moderation / Notifications / Misc) behind
`if !cfg.CIMode` and prints a one-line "skipping ..." banner so the run
log makes the choice obvious. The Users / Tracks / Playlists path is
unchanged — same code, same validation pass at the end.
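
A minimal sketch of that gating shape (illustrative names, not the actual
cmd/tools/seed source):

  // Sketch only: illustrative names, not the real seed tool.
  package main

  import "log"

  type SeedConfig struct {
      CIMode     bool
      TotalUsers int
      Tracks     int
      Playlists  int
  }

  // CIConfig mirrors the bare-minimum volumes listed above.
  func CIConfig() SeedConfig {
      return SeedConfig{CIMode: true, TotalUsers: 5, Tracks: 10, Playlists: 3}
  }

  func runSeed(cfg SeedConfig, core, heavy []func(SeedConfig) error) error {
      for _, seed := range core { // Users / Tracks / Playlists: unchanged in CI mode
          if err := seed(cfg); err != nil {
              return err
          }
      }
      if cfg.CIMode {
          log.Println("CI mode: skipping Social/Chat/Live/Marketplace/... seeders")
          return nil // Validate() and the SUMMARY printout still run after this
      }
      for _, seed := range heavy {
          if err := seed(cfg); err != nil {
              return err
          }
      }
      return nil
  }

  func main() { _ = runSeed(CIConfig(), nil, nil) }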

Time: ~5s in CI mode (bcrypt cost 12 × 5 + a handful of bulk inserts)
vs the ~60s minimal mode and ~5min full mode, measured locally
against a tmpfs Postgres.

Validate() and the SUMMARY printout work unchanged — empty tables
just show "0 rows", and the orphan-FK checks remain useful (and pass
trivially when the heavy seeders are skipped).

modeName() returns "CI" so the boot banner reflects the choice.
go build ./... clean. Help output:

  -ci          Bare-minimum seed for E2E CI (...)
  -minimal     Use reduced volumes (50 users, 200 tracks) for fast dev

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-25 23:48:35 +02:00
senke
e3bf2d2aea feat(tools): add cmd/migrate_storage CLI for bulk local→s3 migration (v1.0.8 P3)
Some checks failed
Veza CI / Backend (Go) (push) Failing after 0s
Veza CI / Frontend (Web) (push) Failing after 0s
Veza CI / Rust (Stream Server) (push) Failing after 0s
Security Scan / Secret Scanning (gitleaks) (push) Failing after 0s
Veza CI / Notify on failure (push) Failing after 0s
Closes MinIO Phase 3: ops path for migrating existing tracks.

Usage:
  export DATABASE_URL=... AWS_S3_BUCKET=... AWS_S3_ENDPOINT=... ...
  migrate_storage --dry-run --limit=10         # plan a batch
  migrate_storage --batch-size=50 --limit=500  # migrate first 500
  migrate_storage --delete-local=true          # also rm local files

Design:
- Idempotent: WHERE storage_backend='local' + per-row DB update means
  a crashed run resumes cleanly without duplicating uploads.
- Streaming upload via S3StorageService.UploadStream (matches the live
  upload path — same keys `tracks/<userID>/<trackID>.<ext>`, same MIME
  resolution).
- Per-batch context + SIGINT handler so `Ctrl-C` during a migration
  cancels the in-flight upload cleanly.
- Global `--timeout-min=30` safety cap.
- `--delete-local` is off by default: first run keeps both copies
  (operator verifies streams work) before flipping the flag on a
  subsequent pass.
- Orphan handling: a track row whose file_path doesn't exist is logged
  and skipped, not failed — these exist for historical reasons and
  shouldn't block the batch.

Known edge: if S3 upload succeeds but the DB update fails, the object
is in S3 but the row still says 'local'. Log message spells out the
reconcile query. v1.0.9 could add a verification pass.
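
A minimal sketch of the per-row ordering the idempotency claim relies on
(upload first, flip the row second; column and helper names are illustrative):

  // Sketch only: illustrative columns and helpers; the real CLI wires
  // S3StorageService.UploadStream, flags, and structured logging.
  package migratestorage

  import (
      "context"
      "database/sql"
      "errors"
      "fmt"
      "io"
      "os"
      "path/filepath"
  )

  type uploader interface {
      UploadStream(ctx context.Context, key string, r io.Reader) error
  }

  func migrateBatch(ctx context.Context, db *sql.DB, s3 uploader, limit int) error {
      rows, err := db.QueryContext(ctx, `SELECT id, user_id, file_path FROM tracks
        WHERE storage_backend = 'local' ORDER BY id LIMIT $1`, limit)
      if err != nil {
          return err
      }
      type cand struct {
          id, userID int64
          path       string
      }
      var cands []cand
      for rows.Next() {
          var c cand
          if err := rows.Scan(&c.id, &c.userID, &c.path); err != nil {
              rows.Close()
              return err
          }
          cands = append(cands, c)
      }
      rows.Close()
      for _, c := range cands {
          f, err := os.Open(c.path)
          if errors.Is(err, os.ErrNotExist) {
              continue // orphan row: logged and skipped, never fails the batch
          } else if err != nil {
              return err
          }
          key := fmt.Sprintf("tracks/%d/%d%s", c.userID, c.id, filepath.Ext(c.path))
          uploadErr := s3.UploadStream(ctx, key, f)
          f.Close()
          if uploadErr != nil {
              return uploadErr
          }
          // Flip the row only after the upload succeeds: a crash above this line
          // leaves storage_backend='local', so the next run retries the same row.
          if _, err := db.ExecContext(ctx,
              `UPDATE tracks SET storage_backend = 's3' WHERE id = $1`, c.id); err != nil {
              return err // object is in S3, row still 'local': the known edge above
          }
      }
      return nil
  }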

Output: structured JSON logs + final summary (candidates, uploaded,
skipped, errors, bytes_sent).

Refs: plan Batch A step A6, migration 985 schema (Phase 0, d03232c8).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-23 23:38:06 +02:00
senke
94dfc80b73 feat(metrics): ledger-health gauges + alert rules — v1.0.7 item F
Five Prometheus gauges + reconciler metrics + Grafana dashboard +
three alert rules. Closes axis-1 P1.8 and adds observability for
item C's reconciler (user review: "F should include reconciler_*
metrics, otherwise tag is blind on the worker we just shipped").

Gauges (veza_ledger_, sampled every 60s):
  * orphan_refund_rows — THE canary. Pending refunds with empty
    hyperswitch_refund_id older than 5m = Phase 2 crash in
    RefundOrder. Alert: > 0 for 5m → page.
  * stuck_orders_pending — order pending > 30m with non-empty
    payment_id. Alert: > 0 for 10m → page.
  * stuck_refunds_pending — refund pending > 30m with hs_id.
  * failed_transfers_at_max_retry — permanently_failed rows.
  * reversal_pending_transfers — item B rows stuck > 30m.

Reconciler metrics (veza_reconciler_):
  * actions_total{phase} — counter by phase.
  * orphan_refunds_total — two-phase-bug canary.
  * sweep_duration_seconds — exponential histogram.
  * last_run_timestamp — alert: stale > 2h → page (worker dead).

Implementation notes:
  * Sampler thresholds hardcoded to match reconciler defaults —
    intentional mismatch allowed (alerts fire while reconciler
    already working = correct behavior).
  * Query error sets gauge to -1 (sentinel for "sampler broken"); a
    sampler sketch follows these notes.
  * marketplace package routes through monitoring recorders so it
    doesn't import prometheus directly.
  * Sampler runs regardless of Hyperswitch enablement; gauges
    default 0 when pipeline idle.
  * Graceful shutdown wired in cmd/api/main.go.
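
A minimal sampler sketch for one gauge (illustrative query and wiring, not
the shipped monitoring package):

  // Sketch only: one gauge and its 60s sampler tick.
  package monitoring

  import (
      "context"
      "database/sql"
      "time"

      "github.com/prometheus/client_golang/prometheus"
  )

  var orphanRefundRows = prometheus.NewGauge(prometheus.GaugeOpts{
      Name: "veza_ledger_orphan_refund_rows",
      Help: "pending refunds with empty hyperswitch_refund_id older than 5m",
  })

  func init() { prometheus.MustRegister(orphanRefundRows) }

  func sampleOrphanRefunds(ctx context.Context, db *sql.DB) {
      var n int64
      err := db.QueryRowContext(ctx, `SELECT COUNT(*) FROM refunds
        WHERE status = 'pending' AND hyperswitch_refund_id = ''
          AND created_at < NOW() - INTERVAL '5 minutes'`).Scan(&n)
      if err != nil {
          orphanRefundRows.Set(-1) // sentinel: the sampler itself is broken
          return
      }
      orphanRefundRows.Set(float64(n))
  }

  func RunLedgerSampler(ctx context.Context, db *sql.DB) {
      t := time.NewTicker(60 * time.Second)
      defer t.Stop()
      for {
          sampleOrphanRefunds(ctx, db)
          select {
          case <-ctx.Done(): // graceful shutdown hook from cmd/api/main.go
              return
          case <-t.C:
          }
      }
  }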

Alert rules in config/alertmanager/ledger.yml with runbook
pointers + detailed descriptions — each alert explains WHAT
happened, WHY the reconciler may not resolve it, and WHERE to
look first.

Grafana dashboard config/grafana/dashboards/ledger-health.json —
top row = 5 stat panels (orphan first, color-coded red on > 0),
middle row = trend timeseries + reconciler action rate by phase,
bottom row = sweep duration p50/p95/p99 + seconds-since-last-tick
+ orphan cumulative.

Tests — 6 cases, all green (sqlite :memory:):
  * CountsStuckOrdersPending (includes the filter on
    non-empty payment_id)
  * StuckOrdersZeroWhenAllCompleted
  * CountsOrphanRefunds (THE canary)
  * CountsStuckRefundsWithHsID (gauge-orthogonality check)
  * CountsFailedAndReversalPendingTransfers
  * ReconcilerRecorders (counter + gauge shape)

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 03:40:14 +02:00
senke
7e180a2c08 feat(workers): hyperswitch reconciliation sweep for stuck pending states — v1.0.7 item C
New ReconcileHyperswitchWorker sweeps for pending orders and refunds
whose terminal webhook never arrived. Pulls live PSP state for each
stuck row and synthesises a webhook payload to feed the normal
ProcessPaymentWebhook / ProcessRefundWebhook dispatcher. The existing
terminal-state guards on those handlers make reconciliation
idempotent against real webhooks — a late webhook after the reconciler
resolved the row is a no-op.
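
A minimal sketch of the synthetic-webhook mechanic for the stuck-order phase
(illustrative types and payload shape; the real worker goes through the
marketplace dispatcher and its terminal-state guards):

  // Sketch only: illustrative types, not the shipped worker.
  package workers

  import (
      "context"
      "encoding/json"
      "fmt"
  )

  type HyperswitchReadClient interface {
      GetPaymentStatus(ctx context.Context, paymentID string) (string, error)
  }

  type webhookDispatcher interface {
      ProcessPaymentWebhook(ctx context.Context, eventType string, payload []byte) error
  }

  // reconcileStuckOrder pulls live PSP state and replays it through the normal
  // webhook path; the handler's terminal-state guard keeps this idempotent if
  // the real webhook shows up later.
  func reconcileStuckOrder(ctx context.Context, psp HyperswitchReadClient,
      d webhookDispatcher, paymentID string) error {
      status, err := psp.GetPaymentStatus(ctx, paymentID)
      if err != nil {
          return err // leave the row untouched; the next tick retries
      }
      payload, _ := json.Marshal(map[string]string{
          "payment_id": paymentID,
          "status":     status,
      })
      return d.ProcessPaymentWebhook(ctx, fmt.Sprintf("payment.%s", status), payload)
  }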

Three stuck-state classes covered:
  1. Stuck orders (pending > 30m, non-empty payment_id) → GetPaymentStatus
     + synthetic payment.<status> webhook.
  2. Stuck refunds with PSP id (pending > 30m, non-empty
     hyperswitch_refund_id) → GetRefundStatus + synthetic
     refund.<status> webhook (error_message forwarded).
  3. Orphan refunds (pending > 5m, EMPTY hyperswitch_refund_id) →
     mark failed + roll order back to completed + log ERROR. This
     is the "we crashed between Phase 1 and Phase 2 of RefundOrder"
     case, operator-attention territory.

New interfaces:
  * marketplace.HyperswitchReadClient — read-only PSP surface the
    worker depends on (GetPaymentStatus, GetRefundStatus). The
    worker never calls CreatePayment / CreateRefund.
  * hyperswitch.Client.GetRefund + RefundStatus struct added.
  * hyperswitch.Provider gains GetRefundStatus + GetPaymentStatus
    pass-throughs that satisfy the marketplace interface.

Configuration (all env-var tunable with sensible defaults):
  * RECONCILE_WORKER_ENABLED=true
  * RECONCILE_INTERVAL=1h (ops can drop to 5m during incident
    response without a code change)
  * RECONCILE_ORDER_STUCK_AFTER=30m
  * RECONCILE_REFUND_STUCK_AFTER=30m
  * RECONCILE_REFUND_ORPHAN_AFTER=5m (shorter because "app crashed"
    is a different signal from "network hiccup")

Operational details:
  * Batch limit 50 rows per phase per tick so a 10k-row backlog
    doesn't hammer Hyperswitch. Next tick picks up the rest.
  * PSP read errors leave the row untouched — next tick retries.
    Reconciliation is always safe to replay.
  * Structured log on every action so `grep reconcile` tells the
    ops story: which order/refund got synced, against what status,
    how long it was stuck.
  * Worker wired in cmd/api/main.go, gated on
    HyperswitchEnabled + HyperswitchAPIKey. Graceful shutdown
    registered.
  * RunOnce exposed as public API for ad-hoc ops trigger during
    incident response.

Tests — 10 cases, all green (sqlite :memory:):
  * TestReconcile_StuckOrder_SyncsViaSyntheticWebhook
  * TestReconcile_RecentOrder_NotTouched
  * TestReconcile_CompletedOrder_NotTouched
  * TestReconcile_OrderWithEmptyPaymentID_NotTouched
  * TestReconcile_PSPReadErrorLeavesRowIntact
  * TestReconcile_OrphanRefund_AutoFails_OrderRollsBack
  * TestReconcile_RecentOrphanRefund_NotTouched
  * TestReconcile_StuckRefund_SyncsViaSyntheticWebhook
  * TestReconcile_StuckRefund_FailureStatus_PassesErrorMessage
  * TestReconcile_AllTerminalStates_NoOp

CHANGELOG v1.0.7-rc1 updated with the full item C section between D
and the existing E block, matching the order convention (ship order:
A → D → B → E → C, CHANGELOG order follows).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 03:08:15 +02:00
senke
3c4d0148be feat(webhooks): persist raw hyperswitch payloads to audit log — v1.0.7 item E
Every POST /webhooks/hyperswitch delivery now writes a row to
`hyperswitch_webhook_log` regardless of signature-valid or
processing outcome. Captures both legitimate deliveries and attack
probes — a forensics query now has the actual bytes to read, not
just a "webhook rejected" log line. Disputes (axis-1 P1.6) ride
along: the log captures dispute.* events alongside payment and
refund events, ready for when disputes get a handler.
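
A minimal sketch of that capture path, assuming a Gin handler (signature
verification, dispatch, and event_type extraction are passed in or elided;
not the shipped handler):

  // Sketch only: illustrative shape of the single-INSERT capture path.
  package handlers

  import (
      "database/sql"
      "io"
      "net/http"
      "strings"

      "github.com/gin-gonic/gin"
      "github.com/google/uuid"
  )

  const maxWebhookBody = 64 << 10 // 64KB cap: oversize bodies never reach the INSERT

  func HyperswitchWebhook(db *sql.DB, verify func([]byte, *gin.Context) bool,
      dispatch func([]byte, bool) string) gin.HandlerFunc {
      return func(c *gin.Context) {
          body, err := io.ReadAll(io.LimitReader(c.Request.Body, maxWebhookBody+1))
          if err != nil || len(body) > maxWebhookBody {
              c.AbortWithStatus(http.StatusRequestEntityTooLarge) // 413, no INSERT
              return
          }
          requestID := c.GetHeader("X-Request-Id")
          if requestID == "" {
              requestID = uuid.NewString() // still correlates to VEZA's structured logs
          }
          sigValid := verify(body, c)
          result := dispatch(body, sigValid) // 'ok' / 'error: <msg>' / 'signature_invalid' / 'skipped'
          // Single INSERT with the final state: no two-phase update race.
          _, dbErr := db.ExecContext(c.Request.Context(), `INSERT INTO hyperswitch_webhook_log
            (payload, signature_valid, processing_result, source_ip, user_agent, request_id)
            VALUES ($1,$2,$3,$4,$5,$6)`,
              strings.ToValidUTF8(string(body), ""), sigValid, result,
              c.ClientIP(), c.Request.UserAgent(), requestID)
          _ = dbErr // logged but swallowed: the contract is to ack Hyperswitch
          c.Status(http.StatusOK)
      }
  }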

Table shape (migration 984):
  * payload TEXT — readable in psql, invalid UTF-8 replaced with
    empty (forensics value is in headers + ip + timing for those
    attacks, not the binary body).
  * signature_valid BOOLEAN + partial index so "show me attack
    attempts" queries are instantaneous.
  * processing_result TEXT — 'ok' / 'error: <msg>' /
    'signature_invalid' / 'skipped'. Matches the P1.5 action
    semantic exactly.
  * source_ip, user_agent, request_id — forensics essentials.
    request_id is captured from Hyperswitch's X-Request-Id header
    when present, else a server-side UUID so every row correlates
    to VEZA's structured logs.
  * event_type — best-effort extract from the JSON payload, NULL
    on malformed input.

Hardening:
  * 64KB body cap via io.LimitReader rejects oversize with 413
    before any INSERT — prevents log-spam DoS.
  * Single INSERT per delivery with final state; no two-phase
    update race on signature-failure path. signature_invalid and
    processing-error rows both land.
  * DB persistence failures are logged but swallowed — the
    endpoint's contract is to ack Hyperswitch, not perfect audit.

Retention sweep:
  * CleanupHyperswitchWebhookLog in internal/jobs, daily tick,
    batched DELETE (10k rows + 100ms pause) so a large backlog
    doesn't lock the table.
  * HYPERSWITCH_WEBHOOK_LOG_RETENTION_DAYS (default 90).
  * Same goroutine-ticker pattern as ScheduleOrphanTracksCleanup.
  * Wired in cmd/api/main.go alongside the existing cleanup jobs.

Tests: 5 in webhook_log_test.go (persistence, request_id auto-gen,
invalid-JSON leaves event_type empty, invalid-signature capture,
extractEventType 5 sub-cases) + 4 in cleanup_hyperswitch_webhook_
log_test.go (deletes-older-than, noop, default-on-zero,
context-cancel). Migration 984 applied cleanly to local Postgres;
all indexes present.

Also (v107-plan.md):
  * Item G acceptance gains an explicit Idempotency-Key threading
    requirement with an empty-key loud-fail test — "literally
    copy-paste D's 4-line test skeleton". Closes the risk that
    item G silently reopens the HTTP-retry duplicate-charge
    exposure D closed.

Out of scope for E (noted in CHANGELOG):
  * Rate limit on the endpoint — pre-existing middleware covers
    it at the router level; adding a per-endpoint limit is
    separate scope.
  * Readable-payload SQL view — deferred, the TEXT column is
    already human-readable; a convenience view is a nice-to-have
    not a ship-blocker.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-18 02:44:58 +02:00
senke
d2bb9c0e78 feat(marketplace): async stripe connect reversal worker — v1.0.7 item B day 2
Day-2 cut of item B: the reversal path becomes async. Pre-v1.0.7
(and v1.0.7 day 1) the refund handler flipped seller_transfers
straight from completed to reversed without ever calling Stripe —
the ledger said "reversed" while the seller's Stripe balance still
showed the original transfer as settled. The new flow:

  refund.succeeded webhook
    → reverseSellerAccounting transitions row: completed → reversal_pending
    → StripeReversalWorker (every REVERSAL_CHECK_INTERVAL, default 1m)
      → calls ReverseTransfer on Stripe
      → success: row → reversed + persist stripe_reversal_id
      → 404 already-reversed (dead code until day 3): row → reversed + log
      → 404 resource_missing (dead code until day 3): row → permanently_failed
      → transient error: stay reversal_pending, bump retry_count,
        exponential backoff (base * 2^retry, capped at backoffMax)
      → retries exhausted: row → permanently_failed
    → buyer-facing refund completes immediately regardless of Stripe health
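
A minimal sketch of the per-row branch shape above (illustrative types and
field names; the real worker routes every mutation through TransitionStatus):

  // Sketch only: illustrative worker types, not the shipped package.
  package workers

  import (
      "context"
      "errors"
      "time"
  )

  var (
      ErrTransferAlreadyReversed = errors.New("transfer already reversed")
      ErrTransferNotFound        = errors.New("transfer not found")
  )

  type SellerTransfer struct {
      ID               int64
      StripeTransferID string
      RetryCount       int
  }

  type stripeAPI interface {
      ReverseTransfer(ctx context.Context, transferID string, amount *int64, reason string) (string, error)
  }

  type StripeReversalWorker struct {
      stripe                  stripeAPI
      maxRetries              int
      backoffBase, backoffMax time.Duration
      transition              func(st *SellerTransfer, to string, extras map[string]any)
      retryLater              func(st *SellerTransfer, at time.Time)
  }

  func (w *StripeReversalWorker) processOne(ctx context.Context, st *SellerTransfer) {
      reversalID, err := w.stripe.ReverseTransfer(ctx, st.StripeTransferID, nil, "refund") // nil = full reversal
      switch {
      case err == nil:
          w.transition(st, "reversed", map[string]any{"stripe_reversal_id": reversalID})
      case errors.Is(err, ErrTransferAlreadyReversed): // dead code until day 3 emits the sentinel
          w.transition(st, "reversed", nil)
      case errors.Is(err, ErrTransferNotFound): // dead code until day 3
          w.transition(st, "permanently_failed", nil)
      case st.RetryCount+1 >= w.maxRetries:
          w.transition(st, "permanently_failed", nil)
      default: // transient error: stay reversal_pending with base * 2^retry backoff
          backoff := w.backoffBase << st.RetryCount
          if backoff > w.backoffMax {
              backoff = w.backoffMax
          }
          st.RetryCount++
          w.retryLater(st, time.Now().Add(backoff))
      }
  }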

State machine enforcement:
  * New `SellerTransfer.TransitionStatus(tx, to, extras)` wraps every
    mutation: validates against AllowedTransferTransitions, guarded
    UPDATE with WHERE status=<from> (optimistic lock semantics), no
    RowsAffected = stale state / concurrent winner detected.
  * processSellerTransfers no longer mutates .Status in place —
    terminal status is decided before struct construction, so the
    row is Created with its final state.
  * transfer_retry.retryOne and admin RetryTransfer route through
    TransitionStatus. Legacy direct assignment removed.
  * TestNoDirectTransferStatusMutation greps the package for any
    `st.Status = "..."` / `t.Status = "..."` / GORM
    Model(&SellerTransfer{}).Update("status"...) outside the
    allowlist and fails if found. Verified by temporarily injecting
    a violation during development — test caught it as expected.

Configuration (v1.0.7 item B):
  * REVERSAL_WORKER_ENABLED=true (default)
  * REVERSAL_MAX_RETRIES=5 (default)
  * REVERSAL_CHECK_INTERVAL=1m (default)
  * REVERSAL_BACKOFF_BASE=1m (default)
  * REVERSAL_BACKOFF_MAX=1h (default, caps exponential growth)
  * .env.template documents TRANSFER_RETRY_* and REVERSAL_* env vars
    so an ops reader can grep them.

Interface change: TransferService.ReverseTransfer(ctx,
stripe_transfer_id, amount *int64, reason) (reversalID, error)
added. All four mocks extended (process_webhook, transfer_retry,
admin_transfer_handler, payment_flow integration). amount=nil means
full reversal; v1.0.7 always passes nil (partial reversal is future
scope per axis-1 P2).

Stripe 404 disambiguation (ErrTransferAlreadyReversed /
ErrTransferNotFound) is wired in the worker as dead code — the
sentinels are declared and the worker branches on them, but
StripeConnectService.ReverseTransfer doesn't yet emit them. Day 3
will parse stripe.Error.Code and populate the sentinels; no worker
change needed at that point. Keeping the handling skeleton in day 2
so the worker's branch shape doesn't change between days and the
tests can already cover all four paths against the mock.

Worker unit tests (9 cases, all green, sqlite :memory:):
  * happy path: reversal_pending → reversed + stripe_reversal_id set
  * already reversed (mock returns sentinel): → reversed + log
  * not found (mock returns sentinel): → permanently_failed + log
  * transient 503: retry_count++, next_retry_at set with backoff,
    stays reversal_pending
  * backoff capped at backoffMax (verified with base=1s, max=10s,
    retry_count=4 → capped at 10s not 16s)
  * max retries exhausted: → permanently_failed
  * legacy row with empty stripe_transfer_id: → permanently_failed,
    does not call Stripe
  * only picks up reversal_pending (skips all other statuses)
  * respects next_retry_at (future rows skipped)

Existing test updated: TestProcessRefundWebhook_SucceededFinalizesState
now asserts the row lands at reversal_pending with next_retry_at
set (worker's responsibility to drive to reversed), not reversed.

Worker wired in cmd/api/main.go alongside TransferRetryWorker,
sharing the same StripeConnectService instance. Shutdown path
registered for graceful stop.

Cut from day 2 scope (per agreed-upon discipline), landing in day 3:
  * Stripe 404 disambiguation implementation (parse error.Code)
  * End-to-end smoke probe (refund → reversal_pending → worker
    processes → reversed) against local Postgres + mock Stripe
  * Batch-size tuning / inter-batch sleep — batchLimit=20 today is
    safely under Stripe's 100 req/s default rate limit; revisit if
    observed load warrants

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
2026-04-17 15:34:29 +02:00
senke
712a0568e3 feat(workers): hourly cleanup of orphan tracks stuck in processing
Upload flow: POST creates a track row with `status=processing` and
writes the file at `file_path`. If the uploader process dies (OOM,
SIGKILL during deploy, disk wipe) between row-create and status-update,
the row stays in `processing` forever with a `file_path` that doesn't
exist. The library UI shows a ghost track the user can never play,
never reach, and only partially delete.

New worker:

  * `jobs/cleanup_orphan_tracks.go` — `CleanupOrphanTracks` queries
    tracks with `status=processing AND created_at < NOW()-1h`, stats
    the `file_path`, and flips the row to `status=failed` with
    `status_message = "orphan cleanup: file missing on disk after >1h
    in processing"`. Never deletes; never touches present files or
    rows already in another state. Safe to run repeatedly (see the
    sketch after this list).
  * `ScheduleOrphanTracksCleanup(db, logger)` runs once at boot and
    then every hour thereafter. Wired in `cmd/api/main.go` right after
    route setup so restarts trigger an immediate scan.
  * Threshold exported as `OrphanTrackAgeThreshold` constant so tests
    and future tuning don't need to edit the worker.
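
A minimal sketch of that sweep (illustrative SQL and columns; the real
worker uses GORM and the exported OrphanTrackAgeThreshold constant):

  // Sketch only: illustrative query and columns.
  package jobs

  import (
      "context"
      "database/sql"
      "errors"
      "os"
      "time"
  )

  const OrphanTrackAgeThreshold = time.Hour

  func CleanupOrphanTracks(ctx context.Context, db *sql.DB) error {
      if db == nil {
          return nil // nil-database safety: never panic at boot
      }
      rows, err := db.QueryContext(ctx, `SELECT id, file_path FROM tracks
        WHERE status = 'processing' AND created_at < NOW() - INTERVAL '1 hour'`)
      if err != nil {
          return err
      }
      defer rows.Close()
      for rows.Next() {
          var id int64
          var path string
          if err := rows.Scan(&id, &path); err != nil {
              return err
          }
          if _, statErr := os.Stat(path); !errors.Is(statErr, os.ErrNotExist) {
              continue // file present (slow upload) or stat error: leave the row alone
          }
          if _, err := db.ExecContext(ctx, `UPDATE tracks SET status = 'failed',
            status_message = 'orphan cleanup: file missing on disk after >1h in processing'
            WHERE id = $1 AND status = 'processing'`, id); err != nil {
              return err
          }
      }
      return rows.Err()
  }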

Tests: 5 cases in `cleanup_orphan_tracks_test.go`:
  - `_FlipsStuckMissingFile` happy path
  - `_LeavesFilePresent` (slow uploads must not be failed)
  - `_LeavesRecent` (below threshold)
  - `_IgnoresAlreadyFailed` (idempotent)
  - `_NilDatabaseIsNoop` (safety)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-04-16 14:57:24 +02:00
senke
9cdfc6d898 fix(backend): J4 — GDPR-compliant hard delete with Redis and ES cleanup
Closes TODO(HIGH-007). When the hard-delete worker anonymizes a user past
their recovery deadline, it now also cleans the user's residual data from
Redis and Elasticsearch, not just PostgreSQL. Without this, a user who
invoked their right to erasure would still appear in cached feed/profile
responses and in ES search results for up to the next reindex cycle.

Worker changes (internal/workers/hard_delete_worker.go):

  WithRedis / WithElasticsearch builder methods inject the clients. Both
  are optional: if either is nil (feature disabled or unreachable), the
  corresponding cleanup is skipped with a debug log and the worker keeps
  going. Partial progress beats panic.

  cleanRedisKeys uses SCAN with a cursor loop (COUNT 100), NEVER KEYS —
  KEYS would block the Redis server on multi-million-key deployments.
  Pattern is user:{id}:*. Transient SCAN errors retry up to 3 times with
  100ms * retry linear backoff; persistent errors return without panic.
  DEL errors on a batch are logged but non-fatal so subsequent batches
  are still attempted.
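
A minimal sketch of that SCAN loop, assuming the go-redis v9 client
(retry/backoff and logging elided):

  // Sketch only: SCAN cursor loop, never KEYS.
  package workers

  import (
      "context"
      "fmt"

      "github.com/redis/go-redis/v9"
  )

  func cleanRedisKeys(ctx context.Context, rdb *redis.Client, userID string) error {
      if rdb == nil {
          return nil // Redis disabled: skip, partial progress beats panic
      }
      pattern := fmt.Sprintf("user:%s:*", userID)
      var cursor uint64
      for {
          keys, next, err := rdb.Scan(ctx, cursor, pattern, 100).Result()
          if err != nil {
              return err
          }
          if len(keys) > 0 {
              // DEL errors are logged but non-fatal in the real worker, so later
              // batches are still attempted.
              _ = rdb.Del(ctx, keys...).Err()
          }
          cursor = next
          if cursor == 0 {
              return nil
          }
      }
  }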

  cleanESDocs hits three indices independently:
    - users index: DELETE doc by _id (the user UUID); 404 treated as
      success (already gone = desired state)
    - tracks index: DeleteByQuery with a terms filter on _id, using the
      list of track IDs collected from PostgreSQL BEFORE anonymization
    - playlists index: same pattern as tracks
  A failure on one index does not prevent the others from being tried;
  the first error is returned so the caller can log.

  Track/playlist IDs are pre-collected (collectTrackIDs, collectPlaylistIDs)
  before the UPDATE anonymization runs, because the anonymization does NOT
  cascade (no DELETE on users), so tracks and playlists rows remain with
  their creator_id / user_id intact and resolvable at query time.
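
A minimal sketch of the per-index deletes, assuming the go-elasticsearch v8
client (the playlists index and per-index error aggregation are elided):

  // Sketch only: go-elasticsearch v8 call shapes.
  package workers

  import (
      "bytes"
      "context"
      "encoding/json"

      elasticsearch "github.com/elastic/go-elasticsearch/v8"
  )

  func cleanESDocs(ctx context.Context, es *elasticsearch.Client, userID string, trackIDs []string) error {
      if es == nil {
          return nil // ES disabled or unreachable at startup: skip
      }
      // users index: delete the single doc by _id; a 404 means already gone,
      // which is the desired state.
      res, err := es.Delete("users", userID, es.Delete.WithContext(ctx))
      if err != nil {
          return err
      }
      res.Body.Close()
      if len(trackIDs) == 0 {
          return nil // nothing to DeleteByQuery (IDs were collected before anonymization)
      }
      // tracks index: terms filter on _id over the pre-collected IDs.
      q, _ := json.Marshal(map[string]any{
          "query": map[string]any{"terms": map[string]any{"_id": trackIDs}},
      })
      res, err = es.DeleteByQuery([]string{"tracks"}, bytes.NewReader(q),
          es.DeleteByQuery.WithContext(ctx))
      if err != nil {
          return err
      }
      res.Body.Close()
      return nil
  }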

Wiring (cmd/api/main.go):

  The worker now receives cfg.RedisClient directly, and an optional ES
  client built from elasticsearch.LoadConfig() + NewClient. If ES is
  disabled or unreachable at startup, the worker logs a warning and
  proceeds with Redis-only cleanup.

Tests (internal/workers/hard_delete_worker_test.go, +260 lines):

  Pure-function unit tests:
    - TestUUIDsToStrings
    - TestEsIndexNameFor
  Nil-client safety tests:
    - TestCleanRedisKeys_NilClientIsNoop
    - TestCleanESDocs_NilClientIsNoop
  ES mock-server tests (httptest.Server mimicking /_doc and
  /_delete_by_query endpoints with valid ES 8.11 responses):
    - TestCleanESDocs_CallsAllThreeIndices — verifies the three expected
      HTTP calls land with the right paths and request bodies containing
      the provided UUIDs
    - TestCleanESDocs_SkipsEmptyIDLists — verifies no DeleteByQuery is
      issued when the ID lists are empty
  Redis testcontainer integration test (gated by VEZA_SKIP_INTEGRATION):
    - TestCleanRedisKeys_Integration — seeds 154 keys (4 fixed + 150 bulk
      to force the SCAN loop past a single batch) plus 4 unrelated keys
      from another user / global, runs cleanRedisKeys, asserts all 154
      own keys are gone and all 4 unrelated keys remain.

Verification:
  go build ./...                                                OK
  go vet ./...                                                  OK
  VEZA_SKIP_INTEGRATION=1 go test ./internal/workers/... -short  OK
  go test ./internal/workers/ -run TestCleanRedisKeys_Integration
    → testcontainers spins redis:7-alpine, test passes in 1.34s

Out of J4 scope (noted for a follow-up):
  - No "activity" ES index exists in the codebase today (the audit plan
    mentioned it as a possible target). The three real indices with user
    data — users, tracks, playlists — are all now cleaned.
  - Track artist strings (free-form) may still contain the user's
    display name as a cached value in the tracks index after this
    cleanup. Actual user-owned tracks are deleted here, but if a third
    party's track referenced the removed user in its artist field, that
    reference is not touched. Strict RGPD on that edge case is a
    separate ticket.

Refs: AUDIT_REPORT.md §8.5, §10 P5, §12 item 1
2026-04-15 12:25:39 +02:00
senke
bec75f1435 ci: bump Go to 1.25 and fix goimports drift in 3 files
golangci-lint v2.11.4 requires Go >= 1.25. With the workflow on 1.24,
setup-go would silently trigger an in-job auto-toolchain download
(observed in run #71: 'go: github.com/golangci/golangci-lint/v2@v2.11.4
requires go >= 1.25.0; switching to go1.25.9') adding ~3 min to every
Backend (Go) run.

Bump setup-go to 1.25 in ci.yml, backend-ci.yml, go-fuzz.yml so the
prebuilt Go is already the right version.

Also lint-fix three files that golangci-lint's goimports checker
flagged — goimports sorts/groups imports and removes unused ones,
which plain gofmt leaves alone:
  - veza-backend-api/cmd/api/main.go
  - veza-backend-api/internal/api/handlers/chat_handlers.go
  - veza-backend-api/internal/handlers/auth_integration_test.go
2026-04-14 17:02:09 +02:00
senke
a1000ce7fb style(backend): gofmt -w on 85 files (whitespace only)
backend-ci.yml's `test -z "$(gofmt -l .)"` strict gate (added in
13c21ac11) failed on a backlog of unformatted files. None of the
85 files in this commit had been edited since the gate was added
(no push touched veza-backend-api/** in between), so the gate never
fired until today's CI fixes triggered it.

The diff is exclusively whitespace alignment in struct literals
and trailing-space comments. `go build ./...` and the full test
suite (with VEZA_SKIP_INTEGRATION=1 -short) pass identically.
2026-04-14 12:22:14 +02:00
senke
2eff5a9b10 refactor(backend): split seed tool into domain-specific modules
Extract monolithic seed main.go into separate files per domain:
users, tracks, playlists, chat, analytics, marketplace, social,
content, live, moderation, notifications, and misc. Add config,
fake data helpers, and utility modules. Update Makefile targets.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-25 23:35:07 +01:00
senke
6fad0ad68d fix: stabilize frontend — 98 TS errors to 0, align API endpoints, optimize bundle
- Fix 98 TypeScript errors across 37 files:
  - Service layer double-unwrapping (subscriptionService, distributionService, gearService)
  - Self-referencing variables in SearchPageResults
  - FeedView/ExploreView .posts→.items alignment
  - useQueueSync Zustand subscribe API
  - AdminAuditLogsView missing interface fields
  - Toast proxy type, interceptor type narrowing
  - 22 unused imports/variables removed
  - 5 storybook mock data fixes

- Align frontend API calls with backend endpoints:
  - Analytics: useAnalyticsView now calls /creator/analytics/dashboard (was /analytics)
  - Chat: chatService uses /conversations (was mock data), WS URL from backend token
  - Dashboard StatsSection: uses real /dashboard API data (was hardcoded zeros)
  - Settings: suppress 2FA toast error when endpoint unavailable

- Fix marketplace products: seed uses 'active' status (was 'published')
- Enrich seed: admin follows all creators (feed has content)

- Optimize bundle: vendor catch-all 793KB→318KB gzip (-60%)
  Split into vendor-charts, vendor-emoji, vendor-swagger, vendor-media, etc.

- Clean repo: remove ~100 orphaned screenshots, audit reports, logs from root

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-24 21:18:49 +01:00
senke
23487d8723 feat: backend — config, handlers, services, logging, migration
Update RabbitMQ config and eventbus. Improve secret filter logging.
Refine presence, cloud, and social services. Update announcement and
feature flag handlers. Add track_likes updated_at migration. Rebuild
seed binary.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-23 15:46:57 +01:00
senke
73eca4f6ad feat: backend, stream server & infra improvements
Backend (Go):
- Config: CORS, RabbitMQ, rate limit, main config updates
- Routes: core, distribution, tracks routing changes
- Middleware: rate limiter, endpoint limiter, response cache hardening
- Handlers: distribution, search handler fixes
- Workers: job worker improvements
- Upload validator and logging config additions
- New migrations: products, orders, performance indexes
- Seed tooling and data

Stream Server (Rust):
- Audio processing, config, routes, simple stream server updates
- Dockerfile improvements

Infrastructure:
- docker-compose.yml updates
- nginx-rtmp config changes
- Makefile improvements (config, dev, high, infra)
- Root package.json and lock file updates
- .env.example updates

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
2026-03-18 11:36:06 +01:00
senke
9cd0da0046 fix(v0.12.6): apply all pentest remediations — 36 findings across 36 files
CRITICAL fixes:
- Race condition (TOCTOU) in payout/refund with SELECT FOR UPDATE (CRITICAL-001/002; see the sketch after this list)
- IDOR on analytics endpoint — ownership check enforced (CRITICAL-003)
- CSWSH on all WebSocket endpoints — origin whitelist (CRITICAL-004)
- Mass assignment on user self-update — strip privileged fields (CRITICAL-005)
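
A minimal sketch of the SELECT FOR UPDATE shape referenced in CRITICAL-001/002
(illustrative GORM model and statuses, not the shipped payout/refund code):

  // Sketch only: row lock inside the transaction removes the check/update gap.
  package marketplace

  import (
      "errors"

      "gorm.io/gorm"
      "gorm.io/gorm/clause"
  )

  type Order struct {
      ID     uint
      Status string
      Amount int64
  }

  func refundOrder(db *gorm.DB, orderID uint) error {
      return db.Transaction(func(tx *gorm.DB) error {
          var o Order
          // SELECT ... FOR UPDATE: concurrent refunds block here until commit.
          if err := tx.Clauses(clause.Locking{Strength: "UPDATE"}).
              First(&o, orderID).Error; err != nil {
              return err
          }
          if o.Status != "completed" {
              return errors.New("order not refundable") // illustrative guard
          }
          return tx.Model(&o).Update("status", "refunded").Error
      })
  }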

HIGH fixes:
- Path traversal in marketplace upload — UUID filenames (HIGH-001)
- IP spoofing — use Gin trusted proxy c.ClientIP() (HIGH-002)
- Popularity metrics (followers, likes) set to json:"-" (HIGH-003)
- bcrypt cost hardened to 12 everywhere (HIGH-004)
- Refresh token lock made mandatory (HIGH-005)
- Stream token replay prevention with access_count (HIGH-006)
- Subscription trial race condition fixed (HIGH-007)
- License download expiration check (HIGH-008)
- Webhook amount validation (HIGH-009)
- pprof endpoint removed from production (HIGH-010)

MEDIUM fixes:
- WebSocket message size limit 64KB (MEDIUM-010)
- HSTS header in nginx production (MEDIUM-001)
- CORS origin restricted in nginx-rtmp (MEDIUM-002)
- Docker alpine pinned to 3.21 (MEDIUM-003/004)
- Redis authentication enforced (MEDIUM-005)
- GDPR account deletion expanded (MEDIUM-006)
- .gitignore hardened (MEDIUM-007)

LOW/INFO fixes:
- GitHub Actions SHA pinning on all workflows (LOW-001)
- .env.example security documentation (INFO-001)
- Production CORS set to HTTPS (LOW-002)

All tests pass. Go and Rust compile clean.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-14 00:44:46 +01:00
senke
b47fa21331 feat(v0.12.8): documentation & public API — rate limiting, scopes, OpenAPI
- API key rate limiting middleware (1000 reads/h, 200 writes/h per key)
  — separate read/write tracking, keyed by API key ID (not by IP)
  — X-RateLimit-Limit/Remaining/Reset headers on every response
    (a middleware sketch follows this list)
- API key scope enforcement middleware (read → GET, write → POST/PUT/DELETE)
  — admin scope allows everything, CSRF skipped for API key auth
- OpenAPI spec: added securityDefinition ApiKeyAuth (X-API-Key header)
- Swagger annotations: added ApiKeyAuth in cmd/api/main.go
- Wiring in router.go: middlewares applied to the whole /api/v1 group
- Tests: 10 tests (5 rate limiter + 5 scope enforcement), all PASS
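
A minimal sketch of the rate-limit middleware shape (illustrative limiter
interface; the real middleware tracks read and write buckets per API key ID):

  // Sketch only: header emission + 429 path, limiter storage elided.
  package middleware

  import (
      "net/http"
      "strconv"
      "time"

      "github.com/gin-gonic/gin"
  )

  type apiKeyLimiter interface {
      // Allow reports whether this key/kind pair is under quota.
      Allow(keyID, kind string) (allowed bool, remaining int, reset time.Time)
  }

  func APIKeyRateLimit(l apiKeyLimiter, readLimit, writeLimit int) gin.HandlerFunc {
      return func(c *gin.Context) {
          keyID := c.GetString("api_key_id") // set earlier by the X-API-Key auth middleware
          if keyID == "" {
              c.Next() // not API-key auth: other limiters apply
              return
          }
          kind, limit := "read", readLimit
          if c.Request.Method != http.MethodGet {
              kind, limit = "write", writeLimit
          }
          allowed, remaining, reset := l.Allow(keyID, kind)
          c.Header("X-RateLimit-Limit", strconv.Itoa(limit))
          c.Header("X-RateLimit-Remaining", strconv.Itoa(remaining))
          c.Header("X-RateLimit-Reset", strconv.FormatInt(reset.Unix(), 10))
          if !allowed {
              c.AbortWithStatus(http.StatusTooManyRequests)
              return
          }
          c.Next()
      }
  }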

Backend already in place (pre-v0.12.8):
- Swagger UI (gin-swagger + frontend SwaggerUIDoc component)
- API key CRUD (create/list/delete + X-API-Key auth dans AuthMiddleware)
- Developer Dashboard frontend (API keys, webhooks, playground)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 18:44:09 +01:00
senke
24b29d229d fix(v0.12.6.1): remediate 2 CRITICAL + 10 HIGH + 1 MEDIUM pentest findings
Security fixes implemented:

CRITICAL:
- CRIT-001: IDOR on chat rooms — added IsRoomMember check before
  returning room data or message history (returns 404, not 403)
- CRIT-002: play_count/like_count exposed publicly — changed JSON
  tags to "-" so they are never serialized in API responses

HIGH:
- HIGH-001: TOCTOU race on marketplace downloads — transaction +
  SELECT FOR UPDATE on GetDownloadURL
- HIGH-002: HS256 in production docker-compose — replaced JWT_SECRET
  with JWT_PRIVATE_KEY_PATH / JWT_PUBLIC_KEY_PATH (RS256)
- HIGH-003: context.Background() bypass in user repository — full
  context propagation from handlers → services → repository (29 files)
- HIGH-004: Race condition on promo codes — SELECT FOR UPDATE
- HIGH-005: Race condition on exclusive licenses — SELECT FOR UPDATE
- HIGH-006: Rate limiter IP spoofing — SetTrustedProxies(nil) default
- HIGH-007: RGPD hard delete incomplete — added cleanup for sessions,
  settings, follows, notifications, audit_logs anonymization
- HIGH-008: RTMP callback auth weak — fail-closed when unconfigured,
  header-only (no query param), constant-time compare (sketched after this list)
- HIGH-009: Co-listening host hijack — UpdateHostState now takes *Conn
  and verifies IsHost before processing
- HIGH-010: Moderator self-strike — added issuedBy != userID check

MEDIUM:
- MEDIUM-001: Recovery codes used math/rand — replaced with crypto/rand
- MEDIUM-005: Stream token forgeable — resolved by HIGH-002 (RS256)

Updated REMEDIATION_MATRIX: 14 findings marked CORRIGÉ (fixed).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-03-12 05:40:53 +01:00
senke
19fec9e40a feat(gdpr): v0.10.8 data portability - async ZIP export, account deletion, hard delete cron
Some checks failed
Backend API CI / test-integration (push) Failing after 0s
Frontend CI / test (push) Failing after 0s
Storybook Audit / Build & audit Storybook (push) Failing after 0s
Backend API CI / test-unit (push) Failing after 0s
- Export: data_exports table, POST /me/export (202), GET /me/exports, messages+playback_history
- Email notification when the ZIP is ready, rate limit 3/day
- Deletion: keep_public_tracks, full PII anonymization (users, user_profiles)
- HardDeleteWorker: final anonymization after 30 days
- Frontend: POST export, keep_public_tracks checkbox
- MSW handlers for Storybook
2026-03-10 13:57:04 +01:00
senke
7cd01e4216 feat(v0.10.5): complete notifications — F551-F555
Some checks failed
Backend API CI / test-unit (push) Failing after 0s
Backend API CI / test-integration (push) Failing after 0s
Frontend CI / test (push) Failing after 0s
Storybook Audit / Build & audit Storybook (push) Failing after 0s
F555: Backend pagination/filter GetNotifications (type, page, limit) + frontend pagination
F551: WebSocket real-time — backend injects the chat hub and sends on CreateNotification; frontend useChat invalidates
F553: Quiet hours — migration 132, CreateNotification skips push/WS, UI in PushPreferencesSection
F554: Notification grouping — migration 133, group_key/actor_count for like/comment, UI format
F552: Weekly digest — migration 134, NotificationDigestWorker, email template, prefs UI

Acceptance: no gamification notif; defaults unchanged; individual toggles for marketing
2026-03-10 10:02:21 +01:00
senke
2ed2bb9dcf v0.9.4 2026-03-05 23:03:43 +01:00
senke
6823e5a30d release(v0.902): Sentinel - PKCE OAuth, token encryption, redirect validation, CHAT_JWT_SECRET
Some checks failed
Backend API CI / test-unit (push) Failing after 0s
Backend API CI / test-integration (push) Failing after 0s
- PKCE (S256) in OAuth flow: code_verifier in oauth_states, code_challenge in auth URL (generation sketched below)
- CryptoService: AES-256-GCM encryption for OAuth provider tokens at rest
- OAuth redirect URL validated against OAUTH_ALLOWED_REDIRECT_DOMAINS
- CHAT_JWT_SECRET must differ from JWT_SECRET in production
- Migration script: cmd/tools/encrypt_oauth_tokens for existing tokens
- Fixes: VEZA-SEC-003, VEZA-SEC-004, VEZA-SEC-009, VEZA-SEC-010
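
A minimal sketch of the S256 code_verifier / code_challenge derivation
(standard RFC 7636 recipe; persistence in oauth_states elided):

  // Sketch only: verifier goes into the state row, challenge into the auth URL
  // (&code_challenge=...&code_challenge_method=S256).
  package oauth

  import (
      "crypto/rand"
      "crypto/sha256"
      "encoding/base64"
  )

  func newPKCEPair() (verifier, challenge string, err error) {
      buf := make([]byte, 32)
      if _, err = rand.Read(buf); err != nil {
          return "", "", err
      }
      verifier = base64.RawURLEncoding.EncodeToString(buf)
      sum := sha256.Sum256([]byte(verifier))
      challenge = base64.RawURLEncoding.EncodeToString(sum[:])
      return verifier, challenge, nil
  }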
2026-02-26 19:49:15 +01:00
senke
7692c4b8b9 feat(v0.802): frontend Cloud/Gear, MSW, docs, scope v0.803, archive
- Cloud: CloudFileVersions, CloudShareModal, versions/share in CloudView
- Gear: GearDocumentsTab, GearRepairsTab, warranty badge, initialTab
- MSW: cloud versions/share, gear documents/repairs, tags suggest
- Stories: CloudFileVersions, CloudShareModal, GearDetailModal variants
- gearService: listDocuments, uploadDocument, deleteDocument, listRepairs, createRepair, deleteRepair
- cloudService: listVersions, restoreVersion, shareFile, getSharedFile
- gear_warranty_notifier: 24h ticker, notifications for expiring warranty
- tag_handler_test: unit tests
- docs: API_REFERENCE, CHANGELOG, PROJECT_STATE, FEATURE_STATUS v0.802
- SCOPE_CONTROL, .cursorrules: scope v0.803
- archive: V0_802_RELEASE_SCOPE, RETROSPECTIVE_V0802
2026-02-25 14:00:58 +01:00
senke
8162d1b419 feat(cloud): GDPR data export and automatic backup cron 2026-02-25 13:35:16 +01:00
senke
b83a650279 feat(server): start TransferRetryWorker on boot (v0.701) 2026-02-23 23:32:23 +01:00
senke
22e5e21757 chore(audit 2.4, 2.5): remove dead Education code and cmd/modern-server
- Remove Education routes/handlers/core (backend)
- Remove the MSW education handler and Sidebar/locales refs
- Switch Makefile, make/dev.mk, and scripts to cmd/api/main.go
- Remove veza-backend-api/cmd/modern-server/
2026-02-15 14:39:40 +01:00
senke
9b5d2f7c47 fix(backend): replace panic/Fatal with graceful error when Redis down (audit 1.4, P0)
- Add early validation in Setup() returning error if Redis nil in production
- Remove panic/Fatal from routes_core.go and router.go applyCSRFProtection
- Handle Setup() error in cmd/api/main.go and cmd/modern-server/main.go
- Mark audit item 1.4 as done
2026-02-15 14:05:20 +01:00
senke
f0ba7de543 state-ownership: delete unused optimisticStoreUpdates.ts file
- Deleted apps/web/src/utils/optimisticStoreUpdates.ts (unused: no imports found in codebase)
- Mutations already use React Query's onMutate pattern
- No TypeScript errors after deletion
- Actions 4.4.1.2 and 4.4.1.3 complete
2026-01-15 19:26:53 +01:00
senke
76d95ecfb4 Incus deployment fully implemented, Makefile updated, make fmt run 2026-01-13 19:47:57 +01:00
senke
f74b020d4b api-contracts: install openapi-generator-cli and create type generation script
- Completed Action 1.1.2.1: Installed @openapitools/openapi-generator-cli
- Completed Action 1.1.2.2: Created generate-types.sh script
- Added swagger annotations to cmd/modern-server/main.go
- Regenerated swagger.yaml with proper info section
- Successfully generated TypeScript types to src/types/generated/

The script generates types from veza-backend-api/openapi.yaml using
typescript-axios generator and creates barrel exports.
2026-01-11 16:30:43 +01:00
senke
9cd76a512f [LOGGING] Fix #10: silent errors - added logs with context for all errors in core/auth and core/track 2026-01-04 01:44:15 +01:00
senke
965633ef89 [BE-SVC-017] be-svc: Implement graceful shutdown
- Created ShutdownManager for coordinated graceful shutdown of all services
- Added Shutdowner interface for services that need graceful shutdown
- Implemented parallel shutdown with individual timeouts (10s per service; see the sketch below)
- Added global shutdown timeout (30s total)
- Integrated shutdown manager in main.go for:
  - HTTP server shutdown
  - JobWorker cancellation
  - Config.Close() (DB, Redis, RabbitMQ)
  - Logger sync
  - Sentry flush
- Added comprehensive unit tests for shutdown manager
- Prevents registration of new services during shutdown
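
A minimal sketch of the parallel-shutdown shape (illustrative; the real
ShutdownManager also logs per-service errors and blocks new registrations):

  // Sketch only: every service gets its own budget under a global cap.
  package shutdown

  import (
      "context"
      "sync"
      "time"
  )

  type Shutdowner interface {
      Shutdown(ctx context.Context) error
  }

  func ShutdownAll(services map[string]Shutdowner, perService, total time.Duration) {
      ctx, cancel := context.WithTimeout(context.Background(), total) // global 30s cap
      defer cancel()
      var wg sync.WaitGroup
      for name, svc := range services {
          wg.Add(1)
          go func(name string, svc Shutdowner) { // each service gets its own 10s budget
              defer wg.Done()
              sctx, scancel := context.WithTimeout(ctx, perService)
              defer scancel()
              _ = svc.Shutdown(sctx) // errors are logged, not fatal, in the real manager
          }(name, svc)
      }
      wg.Wait()
  }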

Phase: PHASE-6
Priority: P2
Progress: 113/267 (42.32%)
2025-12-24 17:03:11 +01:00
senke
96d9065066 [BE-DB-016] be-db: Add database backup strategy 2025-12-24 15:55:46 +01:00
senke
3d72d5ac3c stabilizing apps/web: FIRST BATCH 2025-12-17 08:07:35 -05:00
senke
094e85c7e3 stabilizing veza-backend-api: P1 & P2 2025-12-16 13:34:08 -05:00
senke
2dfde29f7d refonte: backend-api go first; phase 1 2025-12-12 21:34:34 -05:00
okinrev
87c6461900 report generation and future tasks selection 2025-12-08 19:57:54 +01:00
okinrev
0dc64c7638 chore(dev): add lab migration and run scripts 2025-12-07 14:27:51 +01:00
okinrev
1e4f7b1756 STABILISATION: phase 3–5 – API contract, tests & chat-server hardening 2025-12-06 17:21:59 +01:00
okinrev
f13e14f5b0 chore(backend): remove legacy migrations and main file 2025-12-06 11:50:22 +01:00
okinrev
b7955a680c P0: backend/chat/stream stabilization + new v1 migrations baseline
Backend Go:
- Complete replacement of the old migrations with the V1 baseline aligned on ORIGIN.
- Global hardening of JSON parsing (BindAndValidateJSON + RespondWithAppError).
- Secured config.go, CORS, health statuses, and monitoring.
- Implemented the P0 transactions (RBAC, playlist duplication, social toggles).
- Added a structured job worker (emails, analytics, thumbnails) + associated tests.
- New backend docs: AUDIT_CONFIG, BACKEND_CONFIG, AUTH_PASSWORD_RESET, JOB_WORKER_*.

Chat server (Rust):
- Reworked the JWT pipeline + security, auditing, and advanced rate limiting.
- Full implementation of the message lifecycle (read receipts, delivered, edit/delete, typing).
- Cleaned up panics, robust error handling, structured logs.
- Chat migrations aligned on the UUID schema and new features.

Stream server (Rust):
- Reworked the streaming engine (encoding pipeline + HLS) and the core modules.
- P0 transactions for jobs and segments, atomicity guarantees.
- Detailed pipeline documentation (AUDIT_STREAM_*, DESIGN_STREAM_PIPELINE, TRANSACTIONS_P0_IMPLEMENTATION).

Documentation & audits:
- TRIAGE.md and AUDIT_STABILITY.md brought up to date with the actual state of the 3 services.
- Complete mapping of migrations and transactions (DB_MIGRATIONS_*, DB_TRANSACTION_PLAN, AUDIT_DB_TRANSACTIONS, TRANSACTION_TESTS_PHASE3).
- Reset and cleanup scripts for the lab DB and the V1 baseline.

This commit freezes all of the P0 stabilization work (UUID, backend, chat, and stream) before the next phases (Coherence Guardian, WS hardening, etc.).
2025-12-06 11:14:38 +01:00
okinrev
2425c15b09 adding initial backend API (Go) 2025-12-03 20:29:37 +01:00