New ReconcileHyperswitchWorker sweeps for pending orders and refunds
whose terminal webhook never arrived. Pulls live PSP state for each
stuck row and synthesises a webhook payload to feed the normal
ProcessPaymentWebhook / ProcessRefundWebhook dispatcher. The existing
terminal-state guards on those handlers make reconciliation
idempotent against real webhooks — a late webhook after the reconciler
resolved the row is a no-op.
Three stuck-state classes covered (one-tick sketch follows the list):
1. Stuck orders (pending > 30m, non-empty payment_id) → GetPaymentStatus
+ synthetic payment.<status> webhook.
2. Stuck refunds with PSP id (pending > 30m, non-empty
hyperswitch_refund_id) → GetRefundStatus + synthetic
refund.<status> webhook (error_message forwarded).
3. Orphan refunds (pending > 5m, EMPTY hyperswitch_refund_id) →
mark failed + roll order back to completed + log ERROR. This
is the "we crashed between Phase 1 and Phase 2 of RefundOrder"
case, operator-attention territory.
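A minimal sketch of one tick's order phase. The worker struct, Order
model and payload format are assumptions; the method names, 50-row cap,
thresholds and retry semantics come from the description above.

```go
type ReconcileHyperswitchWorker struct {
	db              *gorm.DB
	psp             HyperswitchReadClient // shape sketched under "New interfaces"
	webhooks        PaymentWebhookDispatcher
	orderStuckAfter time.Duration
}

// PaymentWebhookDispatcher is an illustrative wrapper over the existing
// ProcessPaymentWebhook handler.
type PaymentWebhookDispatcher interface {
	ProcessPaymentWebhook(ctx context.Context, body []byte) error
}

func (w *ReconcileHyperswitchWorker) reconcileStuckOrders(ctx context.Context) {
	var orders []Order // assumed model with PaymentID / Status / UpdatedAt
	w.db.WithContext(ctx).
		Where("status = ? AND payment_id <> '' AND updated_at < ?",
			"pending", time.Now().Add(-w.orderStuckAfter)).
		Limit(50). // batch cap: a 10k backlog drains over successive ticks
		Find(&orders)
	for _, o := range orders {
		status, err := w.psp.GetPaymentStatus(ctx, o.PaymentID)
		if err != nil {
			continue // PSP read error: row untouched, next tick retries
		}
		// Synthesise the webhook that never arrived; the terminal-state
		// guards in ProcessPaymentWebhook make a later real webhook a no-op.
		payload := fmt.Sprintf(`{"event_type":"payment.%s","payment_id":%q}`,
			status, o.PaymentID)
		_ = w.webhooks.ProcessPaymentWebhook(ctx, []byte(payload))
	}
}
```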
New interfaces (shapes sketched below):
* marketplace.HyperswitchReadClient — read-only PSP surface the
worker depends on (GetPaymentStatus, GetRefundStatus). The
worker never calls CreatePayment / CreateRefund.
* hyperswitch.Client.GetRefund + RefundStatus struct added.
* hyperswitch.Provider gains GetRefundStatus + GetPaymentStatus
pass-throughs that satisfy the marketplace interface.
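A plausible shape for the read-only surface. The method set and struct
name are from this change; parameter and return types are assumptions.

```go
type HyperswitchReadClient interface {
	GetPaymentStatus(ctx context.Context, paymentID string) (status string, err error)
	GetRefundStatus(ctx context.Context, refundID string) (RefundStatus, error)
}

// RefundStatus as added to hyperswitch.Client; field names are assumptions.
// ErrorMessage is what gets forwarded into the synthetic refund.<status>
// webhook.
type RefundStatus struct {
	Status       string
	ErrorMessage string
}
```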
Configuration (all env-var tunable with sensible defaults; the assumed
parsing pattern is sketched after the list):
* RECONCILE_WORKER_ENABLED=true
* RECONCILE_INTERVAL=1h (ops can drop to 5m during incident
response without a code change)
* RECONCILE_ORDER_STUCK_AFTER=30m
* RECONCILE_REFUND_STUCK_AFTER=30m
* RECONCILE_REFUND_ORPHAN_AFTER=5m (shorter because "app crashed"
is a different signal from "network hiccup")
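The duration knobs presumably follow the usual env-with-default pattern;
the helper name here is illustrative, not from the change.

```go
// durationEnv: unset or unparsable values fall back to the documented
// default, so a typo cannot disable a safety interval.
func durationEnv(key string, def time.Duration) time.Duration {
	if v := os.Getenv(key); v != "" {
		if d, err := time.ParseDuration(v); err == nil && d > 0 {
			return d
		}
	}
	return def
}

// e.g. interval := durationEnv("RECONCILE_INTERVAL", time.Hour)
```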
Operational details:
* Batch limit 50 rows per phase per tick so a 10k-row backlog
doesn't hammer Hyperswitch. Next tick picks up the rest.
* PSP read errors leave the row untouched — next tick retries.
Reconciliation is always safe to replay.
* Structured log on every action so `grep reconcile` tells the
ops story: which order/refund got synced, against what status,
how long it was stuck.
* Worker wired in cmd/api/main.go, gated on
HyperswitchEnabled + HyperswitchAPIKey. Graceful shutdown
registered.
* RunOnce exposed as public API for ad-hoc ops trigger during
incident response.
Tests — 10 cases, all green (sqlite :memory:):
* TestReconcile_StuckOrder_SyncsViaSyntheticWebhook
* TestReconcile_RecentOrder_NotTouched
* TestReconcile_CompletedOrder_NotTouched
* TestReconcile_OrderWithEmptyPaymentID_NotTouched
* TestReconcile_PSPReadErrorLeavesRowIntact
* TestReconcile_OrphanRefund_AutoFails_OrderRollsBack
* TestReconcile_RecentOrphanRefund_NotTouched
* TestReconcile_StuckRefund_SyncsViaSyntheticWebhook
* TestReconcile_StuckRefund_FailureStatus_PassesErrorMessage
* TestReconcile_AllTerminalStates_NoOp
CHANGELOG v1.0.7-rc1 updated with the full item C section between D
and the existing E block, matching the order convention (ship order:
A → D → B → E → C, CHANGELOG order follows).
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Every POST /webhooks/hyperswitch delivery now writes a row to
`hyperswitch_webhook_log` regardless of signature-valid or
processing outcome. Captures both legitimate deliveries and attack
probes — a forensics query now has the actual bytes to read, not
just a "webhook rejected" log line. Disputes (axis-1 P1.6) ride
along: the log captures dispute.* events alongside payment and
refund events, ready for when disputes get a handler.
Table shape (migration 984; a model sketch follows the list):
* payload TEXT — readable in psql, invalid UTF-8 replaced with the
  empty string (for those attacks the forensic value is in headers +
  IP + timing, not the binary body).
* signature_valid BOOLEAN + partial index so "show me attack
  attempts" queries are instantaneous.
* processing_result TEXT — 'ok' / 'error: <msg>' /
'signature_invalid' / 'skipped'. Matches the P1.5 action
semantic exactly.
* source_ip, user_agent, request_id — forensics essentials.
request_id is captured from Hyperswitch's X-Request-Id header
when present, else a server-side UUID so every row correlates
to VEZA's structured logs.
* event_type — best-effort extract from the JSON payload, NULL
on malformed input.
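A possible GORM model matching that column list. The Go types and index
tag are assumptions; the columns and their semantics are from the
migration description above.

```go
type HyperswitchWebhookLog struct {
	ID               int64  `gorm:"primaryKey"`
	Payload          string // raw body; invalid UTF-8 replaced before insert
	SignatureValid   bool   `gorm:"index:idx_webhook_log_sig_invalid,where:signature_valid = false"`
	ProcessingResult string // 'ok' / 'error: <msg>' / 'signature_invalid' / 'skipped'
	SourceIP         string
	UserAgent        string
	RequestID        string  // X-Request-Id when present, else server-side UUID
	EventType        *string // NULL on malformed JSON
	CreatedAt        time.Time
}
```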
Hardening (body-cap sketch after the list):
* 64KB body cap via io.LimitReader rejects oversize with 413
before any INSERT — prevents log-spam DoS.
* Single INSERT per delivery with final state; no two-phase
update race on signature-failure path. signature_invalid and
processing-error rows both land.
* DB persistence failures are logged but swallowed — the
endpoint's contract is to ack Hyperswitch, not perfect audit.
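The cap itself is a one-screen pattern; a sketch assuming a plain
net/http handler (function name is illustrative).

```go
// readCappedBody enforces the 64KB cap before anything touches the DB.
// Reading maxBody+1 bytes through io.LimitReader distinguishes "exactly
// at the cap" from "over it".
func readCappedBody(w http.ResponseWriter, r *http.Request) ([]byte, bool) {
	const maxBody = 64 << 10 // 64KB
	body, err := io.ReadAll(io.LimitReader(r.Body, maxBody+1))
	if err != nil {
		http.Error(w, "read error", http.StatusBadRequest)
		return nil, false
	}
	if len(body) > maxBody {
		http.Error(w, "payload too large", http.StatusRequestEntityTooLarge)
		return nil, false
	}
	return body, true
}
```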
Retention sweep (batched-delete sketch after the list):
* CleanupHyperswitchWebhookLog in internal/jobs, daily tick,
batched DELETE (10k rows + 100ms pause) so a large backlog
doesn't lock the table.
* HYPERSWITCH_WEBHOOK_LOG_RETENTION_DAYS (default 90).
* Same goroutine-ticker pattern as ScheduleOrphanTracksCleanup.
* Wired in cmd/api/main.go alongside the existing cleanup jobs.
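A sketch of the batched delete, assuming GORM. The table name, 10k batch
size and 100ms pause are from the text; everything else is illustrative.

```go
func cleanupHyperswitchWebhookLog(ctx context.Context, db *gorm.DB, retentionDays int) error {
	cutoff := time.Now().AddDate(0, 0, -retentionDays)
	for {
		// Postgres DELETE has no LIMIT, so batch via an id subselect.
		res := db.WithContext(ctx).Exec(
			`DELETE FROM hyperswitch_webhook_log
			 WHERE id IN (SELECT id FROM hyperswitch_webhook_log
			              WHERE created_at < ? LIMIT 10000)`, cutoff)
		if res.Error != nil {
			return res.Error
		}
		if res.RowsAffected == 0 {
			return nil // backlog drained
		}
		select { // pause between batches so the table isn't held long
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(100 * time.Millisecond):
		}
	}
}
```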
Tests: 5 in webhook_log_test.go (persistence, request_id auto-gen,
invalid-JSON leaves event_type empty, invalid-signature capture,
extractEventType 5 sub-cases) + 4 in
cleanup_hyperswitch_webhook_log_test.go (deletes-older-than, noop,
default-on-zero,
context-cancel). Migration 984 applied cleanly to local Postgres;
all indexes present.
Also (v107-plan.md):
* Item G acceptance gains an explicit Idempotency-Key threading
requirement with an empty-key loud-fail test — "literally
copy-paste D's 4-line test skeleton". Closes the risk that
item G silently reopens the HTTP-retry duplicate-charge
exposure D closed.
Out of scope for E (noted in CHANGELOG):
* Rate limit on the endpoint — pre-existing middleware covers
it at the router level; adding a per-endpoint limit is
separate scope.
* Readable-payload SQL view — deferred, the TEXT column is
already human-readable; a convenience view is a nice-to-have
not a ship-blocker.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Day-2 cut of item B: the reversal path becomes async. Pre-v1.0.7
(and v1.0.7 day 1) the refund handler flipped seller_transfers
straight from completed to reversed without ever calling Stripe —
the ledger said "reversed" while the seller's Stripe balance still
showed the original transfer as settled. The new flow:
refund.succeeded webhook
→ reverseSellerAccounting transitions row: completed → reversal_pending
→ StripeReversalWorker (every REVERSAL_CHECK_INTERVAL, default 1m)
→ calls ReverseTransfer on Stripe
→ success: row → reversed + persist stripe_reversal_id
→ 404 already-reversed (dead code until day 3): row → reversed + log
→ 404 resource_missing (dead code until day 3): row → permanently_failed
→ transient error: stay reversal_pending, bump retry_count,
  exponential backoff (base * 2^retry, capped at backoffMax;
  sketched below)
→ retries exhausted: row → permanently_failed
→ buyer-facing refund completes immediately regardless of Stripe health
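The backoff arithmetic as a sketch; the function name is illustrative,
the formula and cap are from the flow above.

```go
// reversalBackoff computes base * 2^retryCount, capped at maxBackoff.
func reversalBackoff(base, maxBackoff time.Duration, retryCount int) time.Duration {
	d := base << uint(retryCount) // base * 2^retryCount
	if d <= 0 || d > maxBackoff { // d <= 0 guards shift overflow
		return maxBackoff
	}
	return d
}
```

With base=1s, maxBackoff=10s, retryCount=4 this yields 16s uncapped and
returns 10s, matching the capped-backoff unit test below.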
State machine enforcement (TransitionStatus sketch follows the list):
* New `SellerTransfer.TransitionStatus(tx, to, extras)` wraps every
mutation: validates against AllowedTransferTransitions, guarded
UPDATE with WHERE status=<from> (optimistic lock semantics), no
RowsAffected = stale state / concurrent winner detected.
* processSellerTransfers no longer mutates .Status in place —
terminal status is decided before struct construction, so the
row is Created with its final state.
* transfer_retry.retryOne and admin RetryTransfer route through
TransitionStatus. Legacy direct assignment removed.
* TestNoDirectTransferStatusMutation greps the package for any
`st.Status = "..."` / `t.Status = "..."` / GORM
Model(&SellerTransfer{}).Update("status"...) outside the
allowlist and fails if found. Verified by temporarily injecting
a violation during development — test caught it as expected.
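A sketch of the guarded transition, assuming GORM and a nested-map
AllowedTransferTransitions; exact field names and the map shape are
assumptions, the WHERE-guard and RowsAffected check are per the text.

```go
func (st *SellerTransfer) TransitionStatus(tx *gorm.DB, to string, extras map[string]any) error {
	from := st.Status
	if !AllowedTransferTransitions[from][to] {
		return fmt.Errorf("illegal transfer transition %s -> %s", from, to)
	}
	updates := map[string]any{"status": to}
	for k, v := range extras { // e.g. stripe_reversal_id on success
		updates[k] = v
	}
	res := tx.Model(&SellerTransfer{}).
		Where("id = ? AND status = ?", st.ID, from). // optimistic lock
		Updates(updates)
	if res.Error != nil {
		return res.Error
	}
	if res.RowsAffected == 0 { // stale state or concurrent winner
		return fmt.Errorf("transfer %v: no longer in %s", st.ID, from)
	}
	st.Status = to
	return nil
}
```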
Configuration (v1.0.7 item B):
* REVERSAL_WORKER_ENABLED=true (default)
* REVERSAL_MAX_RETRIES=5 (default)
* REVERSAL_CHECK_INTERVAL=1m (default)
* REVERSAL_BACKOFF_BASE=1m (default)
* REVERSAL_BACKOFF_MAX=1h (default, caps exponential growth)
* .env.template documents TRANSFER_RETRY_* and REVERSAL_* env vars
so an ops reader can grep them.
Interface change: TransferService.ReverseTransfer(ctx,
stripe_transfer_id, amount *int64, reason) (reversalID, error)
added. All four mocks extended (process_webhook, transfer_retry,
admin_transfer_handler, payment_flow integration). amount=nil means
full reversal; v1.0.7 always passes nil (partial reversal is future
scope per axis-1 P2).
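The described signature in interface form; parameter names are
assumptions, the shape (amount *int64, nil = full reversal) is from the
text.

```go
type TransferService interface {
	// amount == nil requests a full reversal; v1.0.7 always passes nil.
	ReverseTransfer(ctx context.Context, stripeTransferID string,
		amount *int64, reason string) (reversalID string, err error)
	// ...existing methods elided
}
```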
Stripe 404 disambiguation (ErrTransferAlreadyReversed /
ErrTransferNotFound) is wired in the worker as dead code — the
sentinels are declared and the worker branches on them, but
StripeConnectService.ReverseTransfer doesn't yet emit them. Day 3
will parse stripe.Error.Code and populate the sentinels; no worker
change needed at that point. Keeping the handling skeleton in day 2
so the worker's branch shape doesn't change between days and the
tests can already cover all four paths against the mock.
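The sentinels and branch shape, sketched. The sentinel names are per the
text; the classifier function is illustrative, and until day 3 only the
mocks return these errors.

```go
var (
	ErrTransferAlreadyReversed = errors.New("stripe: transfer already reversed")
	ErrTransferNotFound        = errors.New("stripe: transfer not found")
)

// nextStatus shows the worker's four paths in one place.
func nextStatus(err error) string {
	switch {
	case err == nil, errors.Is(err, ErrTransferAlreadyReversed):
		return "reversed"
	case errors.Is(err, ErrTransferNotFound):
		return "permanently_failed"
	default:
		return "reversal_pending" // transient: retry with backoff
	}
}
```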
Worker unit tests (9 cases, all green, sqlite :memory:):
* happy path: reversal_pending → reversed + stripe_reversal_id set
* already reversed (mock returns sentinel): → reversed + log
* not found (mock returns sentinel): → permanently_failed + log
* transient 503: retry_count++, next_retry_at set with backoff,
stays reversal_pending
* backoff capped at backoffMax (verified with base=1s, max=10s,
retry_count=4 → capped at 10s not 16s)
* max retries exhausted: → permanently_failed
* legacy row with empty stripe_transfer_id: → permanently_failed,
does not call Stripe
* only picks up reversal_pending (skips all other statuses)
* respects next_retry_at (future rows skipped)
Existing test updated: TestProcessRefundWebhook_SucceededFinalizesState
now asserts the row lands at reversal_pending with next_retry_at
set (worker's responsibility to drive to reversed), not reversed.
Worker wired in cmd/api/main.go alongside TransferRetryWorker,
sharing the same StripeConnectService instance. Shutdown path
registered for graceful stop.
Cut from day 2 scope (per agreed-upon discipline), landing in day 3:
* Stripe 404 disambiguation implementation (parse error.Code)
* End-to-end smoke probe (refund → reversal_pending → worker
processes → reversed) against local Postgres + mock Stripe
* Batch-size tuning / inter-batch sleep — batchLimit=20 today is
safely under Stripe's 100 req/s default rate limit; revisit if
observed load warrants
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Second item of the v1.0.6 backlog. The "front 500MB vs back 100MB" mismatch
flagged in the v1.0.5 audit turned out to be a misread — every live pair
was already aligned (tracks 100/100, cloud 500/500, video 500/500). The
real bug is architectural: the same byte values were duplicated in five
places (`track/service.go`, `handlers/upload.go:GetUploadLimits`,
`handlers/education_handler.go`, `upload-modal/constants.ts`, and
`CloudUploadModal.tsx`), drifting silently as soon as anyone tuned one.
Backend — one canonical spec at `internal/config/upload_limits.go`
(sketched after the list):
* `AudioLimit`, `ImageLimit`, `VideoLimit` expose `Bytes()`, `MB()`,
`HumanReadable()`, `AllowedMIMEs` — read lazily from env
(`MAX_UPLOAD_AUDIO_MB`, `MAX_UPLOAD_IMAGE_MB`, `MAX_UPLOAD_VIDEO_MB`)
with defaults 100/10/500.
* Invalid / negative / zero env values fall back to the default;
unreadable config can't turn the limit off silently.
* `track.Service.maxFileSize`, `track_upload_handler.go` error string,
`education_handler.go` video gate, and `upload.go:GetUploadLimits`
all read from this single source. Changing `MAX_UPLOAD_AUDIO_MB`
retunes every path at once.
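A sketch of the limit type under those constraints. The method set, env
names and defaults are from the text; struct fields and MIME values are
assumptions.

```go
type UploadLimit struct {
	envVar       string
	defaultMB    int
	AllowedMIMEs []string
}

// MB reads the env lazily; invalid, negative or zero values fall back
// to the default so unreadable config can't turn the limit off.
func (l UploadLimit) MB() int {
	if v, err := strconv.Atoi(os.Getenv(l.envVar)); err == nil && v > 0 {
		return v
	}
	return l.defaultMB
}

func (l UploadLimit) Bytes() int64          { return int64(l.MB()) << 20 }
func (l UploadLimit) HumanReadable() string { return fmt.Sprintf("%d MB", l.MB()) }

var AudioLimit = UploadLimit{
	envVar: "MAX_UPLOAD_AUDIO_MB", defaultMB: 100,
	AllowedMIMEs: []string{"audio/mpeg" /* illustrative */},
}
```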
Frontend — new `useUploadLimits()` hook:
* Fetches GET `/api/v1/upload/limits` via react-query (5 min stale,
30 min gc), one retry, then silently falls back to baked-in
defaults that match the backend compile-time defaults so the
dropzone stays responsive even without the network round-trip.
* `useUploadModal.ts` replaces its hardcoded `MAX_FILE_SIZE`
constant with `useUploadLimits().audio.maxBytes`, and surfaces
`audioMaxHuman` up to `UploadModal` → `UploadModalDropzone` so
the "max 100 MB" label and the "too large" error toast both
display the live value.
* `MAX_FILE_SIZE` constant kept as pure fallback for pre-network
render (documented as such).
Tests
* 4 Go tests on `config.UploadLimit` (defaults, env override, invalid
env → fallback, non-empty MIME lists).
* 4 Vitest tests on `useUploadLimits` (sync fallback on first render,
typed mapping from server payload, partial-payload falls back
per-category, network failure keeps fallback).
* Existing `trackUpload.integration.test.tsx` (11 cases) still green.
Out of scope (tracked for later):
* `CloudUploadModal.tsx` still has its own 500MB hardcoded — cloud
uploads accept audio+zip+midi with a different category semantic
than the three in `/upload/limits`. Unifying those deserves its
own design pass, not a drive-by.
* No runtime refactor of admin-provided custom category limits —
the current tri-category split covers every upload we ship today.
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two connected failure modes that silently break multi-pod deployments:
1. `RedisURL` has a struct-level default (`redis://<appDomain>:6379`)
that makes `c.RedisURL == ""` always false. An operator forgetting
to set `REDIS_URL` booted against a phantom host — every Redis call
would then fail, and `ChatPubSubService` would quietly fall back to
an in-memory map. On a single-pod deploy that "works"; on two pods
it silently partitions chat (messages on pod A never reach
subscribers on pod B).
2. The fallback itself was logged at `Warn` level, buried under normal
traffic. Operators only noticed when users reported stuck chats.
Changes:
* `config.go` (`ValidateForEnvironment` prod branch): new check that
  `os.Getenv("REDIS_URL")` is non-empty (sketched after the list). The
  struct field is left alone (dev + test still use the default); we
  inspect the raw env so the check is "explicitly set" rather than
  "non-empty after defaults".
* `chat_pubsub.go` `NewChatPubSubService`: if `redisClient == nil`,
emit an `ERROR` at construction time naming the failure mode
("cross-instance messages will be lost"). Same `Warn`→`Error`
promotion for the `Publish` fallback path — runbook-worthy.
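A sketch of the prod-branch check. The function name is illustrative;
the raw-env inspection is the point.

```go
func validateRedisForProduction(appEnv string) error {
	if appEnv != "production" {
		return nil
	}
	// Check the raw env, not the struct field: the struct default
	// (redis://<appDomain>:6379) makes a field-emptiness check always pass.
	if os.Getenv("REDIS_URL") == "" {
		return errors.New("REDIS_URL must be explicitly set in production; " +
			"without it chat falls back to in-memory pub/sub and " +
			"cross-instance messages will be lost")
	}
	return nil
}
```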
Tests: new `chat_pubsub_test.go` with a `zaptest/observer` that asserts
the ERROR-level log fires exactly once when Redis is nil, plus an
in-memory fan-out happy-path so single-pod dev behaviour stays covered.
New `TestValidateForEnvironment_RedisURLRequiredInProduction` mirrors
the Hyperswitch guard test shape.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With payments disabled, the marketplace flow still completes: orders are
created with status `CREATED`, the download URL is released, and no PSP
call is ever made. In other words: on a misconfigured prod instance, every
purchase is free. The only signal was a silent `hyperswitch_enabled=false`
at boot.
`ValidateForEnvironment()` (already wired at `NewConfig` line 513, before
the HTTP listener binds) now rejects `APP_ENV=production` with
`HyperswitchEnabled=false`. The error message names the failure mode
explicitly ("effectively giving away products") rather than a terse
"config invalid" — this is a revenue leak, not a typo.
Dev and staging are unaffected.
Tests: 3 new cases in `validation_test.go`
(`TestValidateForEnvironment_HyperswitchRequiredInProduction`) +
`TestLoadConfig_ProdValid` updated to set `HyperswitchEnabled: true`.
`TestValidateForEnvironment_ClamAVRequiredInProduction` fixture also
includes the new field so its "succeeds" sub-test still runs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
backend-ci.yml's `test -z "$(gofmt -l .)"` strict gate (added in
13c21ac11) failed on a backlog of unformatted files. The 85 files in
this commit all predate the gate, and no push had touched
veza-backend-api/** since it landed, so the gate never fired until
today's CI fixes triggered it.
The diff is exclusively whitespace alignment in struct literals
and trailing-space comments. `go build ./...` and the full test
suite (with VEZA_SKIP_INTEGRATION=1 -short) pass identically.
Three test failures triggered by changes in 73eca4f6a:
1. TestGetCORSOrigins_EnvironmentDefaults expected dev/staging origins
on :8080 but cors.go now generates :18080 (matching the actual
backend port from Dockerfile EXPOSE). Test was the stale side.
2. TestLoadConfig_ProdValid and TestValidateForEnvironment_ClamAVRequiredInProduction
built a Config literal missing fields that ValidateForEnvironment now
requires in production: ChatJWTSecret (must differ from JWTSecret),
OAuthEncryptionKey (≥32 bytes), JWTIssuer, JWTAudience. Also
explicitly set CLAMAV_REQUIRED=true so validation order is deterministic.
Update RabbitMQ config and eventbus. Improve secret filter logging.
Refine presence, cloud, and social services. Update announcement and
feature flag handlers. Add track_likes updated_at migration. Rebuild
seed binary.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Dynamic ORDER BY: explicit whitelist, fallback to created_at DESC
- Login/register now go through the global rate limiter
- VERSION sync + CI check
- Cleaned up stale veza-chat-server references
- Go 1.24 everywhere (Dockerfile, workflows)
- TODO/FIXME/HACK comments converted to issues or resolved
- PKCE (S256) in OAuth flow: code_verifier in oauth_states, code_challenge in auth URL
- CryptoService: AES-256-GCM encryption for OAuth provider tokens at
  rest (sketch below)
- OAuth redirect URL validated against OAUTH_ALLOWED_REDIRECT_DOMAINS
- CHAT_JWT_SECRET must differ from JWT_SECRET in production
- Migration script: cmd/tools/encrypt_oauth_tokens for existing tokens
- Fixes: VEZA-SEC-003, VEZA-SEC-004, VEZA-SEC-009, VEZA-SEC-010
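A minimal AES-256-GCM sketch of the at-rest encryption; the function
shape is illustrative, not the actual CryptoService API.

```go
func encryptToken(key [32]byte, plaintext []byte) ([]byte, error) {
	block, err := aes.NewCipher(key[:])
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize())
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	// Prepend the nonce so decryption needs only the key and ciphertext.
	return gcm.Seal(nonce, nonce, plaintext, nil), nil
}
```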
- Add HLSEnabled and HLSStorageDir to backend config (HLS_STREAMING env)
- Register HLS serving routes (master.m3u8, quality playlist, segments)
behind HLSEnabled feature flag on existing track routes
- Add GetHLSStatus and TriggerHLSTranscode methods to StreamService
for stream server communication
- Update docker-compose (dev, staging, prod) with HLS env vars and
shared hls-data volume between backend and stream-server
- Stream callback already correctly updates stream_manifest_url
INT-03: Tests for health endpoint, auth flow, track upload auth,
webhook HTTPS-only, and rate limit headers. Build-tagged
'integration' to avoid running in regular test suite.
SEC-08: If HYPERSWITCH_ENABLED=true in production, startup now fails
unless HYPERSWITCH_WEBHOOK_SECRET is set. This prevents webhook
signature verification from being silently bypassed.
- Add FrontendURL to config (FRONTEND_URL or VITE_FRONTEND_URL)
- OAuth handlers use config instead of os.Getenv
- Update TODOS_AUDIT: mark UUID migration items as resolved
- Add ISSUES_P2_BACKLOG.md for GitHub issues
- Add ROUTES_ORPHANES.md for routes without UI
- Document FRONTEND_URL in .env.example
Replace NODE_ENV/APP_ENV bypass with DISABLE_RATE_LIMIT_FOR_TESTS=true.
Only test runners should set this. Prevents rate limiting bypass when
APP_ENV=development is mistakenly used in production.
Phase 1 audit - P1.6
Add validation in ValidateForEnvironment() to fail startup when
CLAMAV_REQUIRED=false in production. Virus scanning is mandatory
for all file uploads in production.
Phase 1 audit - P1.4
- 1.6: Replace hardcoded JWT secrets in chat server tests with runtime-generated
values (env TEST_JWT_SECRET or uuid-based fallback)
- 1.7: Add validateNoBypassFlagsInProduction() in config; fail startup if
BYPASS_CONTENT_CREATOR_ROLE or CSRF_DISABLED is set in production
Refs: AUDIT_TECHNIQUE_INTEGRAL_2026_02_15.md items 1.6, 1.7
- Add DATABASE_READ_URL config and InitReadReplica in database package
- Add ForRead() helper for read-only handler routing (sketched below)
- Update TrackService and TrackSearchService to use read replica for reads
- Document setup in DEPLOYMENT_GUIDE.md and .env.template
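A sketch of the routing helper; the struct shape is an assumption, the
fall-back-to-primary behaviour is the expected pattern when no replica
is configured.

```go
func (d *Database) ForRead() *gorm.DB {
	if d.readReplica != nil {
		return d.readReplica // DATABASE_READ_URL configured
	}
	return d.primary // no replica: reads go to the primary
}
```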
- Deleted apps/web/src/utils/optimisticStoreUpdates.ts (unused file)
- File was unused - no imports found in codebase
- Mutations already use React Query's onMutate pattern
- No TypeScript errors after deletion
- Actions 4.4.1.2 and 4.4.1.3 complete
- SQLx conflict resolved (aligned on version 0.7)
- build.rs configured for protoc in chat/stream servers
- Prometheus API migrated to HistogramOpts
- Display/Debug traits fixed (String instead of &dyn Display)
- TOTP API fixed (totp-rs 5.4 with Secret::Encoded)
- tracing-subscriber layers fixed (conditional types)
- VezaError/VezaResult exported in lib.rs
- TransactionProvider simplified (returns void instead of Box<dyn>)
- VezaConfig constrained to Serialize for to_json()
Files: veza-common/Cargo.toml, veza-common/src/*.rs, veza-chat-server/Cargo.toml, veza-chat-server/build.rs, veza-stream-server/Cargo.toml, veza-stream-server/build.rs, VEZA_ROADMAP.json
Hours: 8 estimated, 3 actual