Day-2 cut of item B: the reversal path becomes async. Pre-v1.0.7
(and on v1.0.7 day 1), the refund handler flipped seller_transfers
straight from completed to reversed without ever calling Stripe:
the ledger said "reversed" while the seller's Stripe balance still
showed the original transfer as settled. The new flow:
refund.succeeded webhook
→ reverseSellerAccounting transitions row: completed → reversal_pending
→ StripeReversalWorker (every REVERSAL_CHECK_INTERVAL, default 1m)
→ calls ReverseTransfer on Stripe
→ success: row → reversed + persist stripe_reversal_id
→ 404 already-reversed (dead code until day 3): row → reversed + log
→ 404 resource_missing (dead code until day 3): row → permanently_failed
→ transient error: stay reversal_pending, bump retry_count,
exponential backoff (base * 2^retry, capped at backoffMax; sketch
after the flow)
→ retries exhausted: row → permanently_failed
→ buyer-facing refund completes immediately regardless of Stripe health
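The backoff step above, as a minimal sketch (names are illustrative;
the worker's actual fields may differ):

```go
package worker

import "time"

// nextBackoff returns base * 2^retryCount, capped at backoffMax.
// With base=1s, max=10s, retryCount=4 this yields 10s rather than
// 16s, matching the worker unit test further down.
func nextBackoff(base, backoffMax time.Duration, retryCount int) time.Duration {
	d := base << uint(retryCount) // base * 2^retryCount
	if d <= 0 || d > backoffMax { // d <= 0 guards shift overflow
		return backoffMax
	}
	return d
}
```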
State machine enforcement:
* New `SellerTransfer.TransitionStatus(tx, to, extras)` wraps every
  mutation: it validates against AllowedTransferTransitions and
  issues a guarded UPDATE with WHERE status=<from> (optimistic lock
  semantics); zero RowsAffected means stale state or a concurrent
  winner. See the sketch after this list.
* processSellerTransfers no longer mutates .Status in place —
terminal status is decided before struct construction, so the
row is Created with its final state.
* transfer_retry.retryOne and admin RetryTransfer route through
TransitionStatus. Legacy direct assignment removed.
* TestNoDirectTransferStatusMutation greps the package for any
  `st.Status = "..."` / `t.Status = "..."` / GORM
  Model(&SellerTransfer{}).Update("status"...) outside the
  allowlist and fails if found. Verified by temporarily injecting
  a violation during development; the test caught it as expected.
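The guarded transition, as a minimal sketch. GORM is assumed, the
status names come from the flow above, and everything else
(ErrStaleTransferState, the exact transition table, the ID/Status
fields on the existing SellerTransfer model) is illustrative:

```go
import (
	"errors"
	"fmt"

	"gorm.io/gorm"
)

var ErrStaleTransferState = errors.New("stale status or concurrent transition")

// AllowedTransferTransitions: from-status -> set of legal to-statuses.
var AllowedTransferTransitions = map[string]map[string]bool{
	"completed":        {"reversal_pending": true},
	"reversal_pending": {"reversed": true, "permanently_failed": true},
}

// TransitionStatus is the single mutation path for SellerTransfer.Status.
func (st *SellerTransfer) TransitionStatus(tx *gorm.DB, to string, extras map[string]interface{}) error {
	from := st.Status
	if !AllowedTransferTransitions[from][to] {
		return fmt.Errorf("illegal transfer transition %s -> %s", from, to)
	}
	updates := map[string]interface{}{"status": to}
	for k, v := range extras {
		updates[k] = v // e.g. stripe_reversal_id on success
	}
	res := tx.Model(&SellerTransfer{}).
		Where("id = ? AND status = ?", st.ID, from). // optimistic lock
		Updates(updates)
	if res.Error != nil {
		return res.Error
	}
	if res.RowsAffected == 0 {
		return ErrStaleTransferState // a concurrent writer won the race
	}
	st.Status = to // keep the in-memory copy in sync
	return nil
}
```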
Configuration (v1.0.7 item B):
* REVERSAL_WORKER_ENABLED=true (default)
* REVERSAL_MAX_RETRIES=5 (default)
* REVERSAL_CHECK_INTERVAL=1m (default)
* REVERSAL_BACKOFF_BASE=1m (default)
* REVERSAL_BACKOFF_MAX=1h (default, caps exponential growth)
* .env.template documents TRANSFER_RETRY_* and REVERSAL_* env vars
so an ops reader can grep them.
Interface change: added TransferService.ReverseTransfer(ctx,
stripe_transfer_id, amount *int64, reason) (reversalID, error).
All four mocks extended (process_webhook, transfer_retry,
admin_transfer_handler, payment_flow integration). amount=nil means
full reversal; v1.0.7 always passes nil (partial reversal is future
scope per axis-1 P2).
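As a sketch, the interface shape this implies (Go parameter names
are ours):

```go
type TransferService interface {
	// ReverseTransfer reverses a Stripe transfer and returns the Stripe
	// reversal ID. amount == nil requests a full reversal, the only mode
	// v1.0.7 exercises; a non-nil amount is reserved for future partial
	// reversals.
	ReverseTransfer(ctx context.Context, stripeTransferID string, amount *int64, reason string) (string, error)
	// ...existing methods unchanged...
}
```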
Stripe 404 disambiguation (ErrTransferAlreadyReversed /
ErrTransferNotFound) is wired in the worker as dead code: the
sentinels are declared and the worker branches on them, but
StripeConnectService.ReverseTransfer doesn't yet emit them. Day 3
will parse stripe.Error.Code and populate the sentinels; no worker
change needed at that point. Keeping the handling skeleton in day 2
so the worker's branch shape doesn't change between days and the
tests can already cover all four paths against the mock.
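The day-3 shape, sketched under assumptions: stripe-go exposes a
Code field on *stripe.Error, but the exact code strings below are
to be confirmed against real responses, and classifyStripeError is
our name for the mapping step:

```go
var (
	ErrTransferAlreadyReversed = errors.New("stripe transfer already reversed")
	ErrTransferNotFound        = errors.New("stripe transfer not found")
)

// classifyStripeError maps the raw Stripe error onto the sentinels so the
// worker's existing errors.Is branches light up with no worker change.
func classifyStripeError(err error) error {
	var se *stripe.Error
	if !errors.As(err, &se) {
		return err // transient / unknown: falls through to the retry path
	}
	switch se.Code {
	case "transfer_already_reversed": // assumed code string
		return ErrTransferAlreadyReversed
	case "resource_missing":
		return ErrTransferNotFound
	}
	return err
}
```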
Worker unit tests (9 cases, all green, sqlite :memory:):
* happy path: reversal_pending → reversed + stripe_reversal_id set
* already reversed (mock returns sentinel): → reversed + log
* not found (mock returns sentinel): → permanently_failed + log
* transient 503: retry_count++, next_retry_at set with backoff,
stays reversal_pending
* backoff capped at backoffMax (verified with base=1s, max=10s,
retry_count=4 → capped at 10s not 16s)
* max retries exhausted: → permanently_failed
* legacy row with empty stripe_transfer_id: → permanently_failed,
does not call Stripe
* only picks up reversal_pending (skips all other statuses)
* respects next_retry_at (future rows skipped)
Existing test updated: TestProcessRefundWebhook_SucceededFinalizesState
now asserts the row lands at reversal_pending (not reversed) with
next_retry_at set; driving it to reversed is the worker's
responsibility.
Worker wired in cmd/api/main.go alongside TransferRetryWorker,
sharing the same StripeConnectService instance. Shutdown path
registered for graceful stop.
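Roughly (constructor and shutdown plumbing names are illustrative):

```go
// cmd/api/main.go (sketch)
ctx, cancel := context.WithCancel(context.Background())

// Shares the same StripeConnectService instance as TransferRetryWorker.
reversalWorker := worker.NewStripeReversalWorker(db, stripeConnectSvc, cfg)

var wg sync.WaitGroup
wg.Add(1)
go func() {
	defer wg.Done()
	reversalWorker.Run(ctx) // ticks every REVERSAL_CHECK_INTERVAL
}()

// Graceful stop: cancel the context, then wait out the in-flight batch.
defer func() {
	cancel()
	wg.Wait()
}()
```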
Cut from day 2 scope (per agreed-upon discipline), landing in day 3:
* Stripe 404 disambiguation implementation (parse error.Code)
* End-to-end smoke probe (refund → reversal_pending → worker
processes → reversed) against local Postgres + mock Stripe
* Batch-size tuning / inter-batch sleep: batchLimit=20 today is
  safely under Stripe's 100 req/s default rate limit; revisit if
  observed load warrants it
Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
Two connected failure modes that silently break multi-pod deployments:
1. `RedisURL` has a struct-level default (`redis://<appDomain>:6379`)
   that makes `c.RedisURL == ""` always false. An operator who forgets
   to set `REDIS_URL` boots against a phantom host: every Redis call
   then fails, and `ChatPubSubService` quietly falls back to an
   in-memory map. On a single-pod deploy that "works"; on two pods it
   silently partitions chat (messages on pod A never reach subscribers
   on pod B).
2. The fallback itself was logged at `Warn` level, buried under normal
traffic. Operators only noticed when users reported stuck chats.
Changes:
* `config.go` (`ValidateForEnvironment` prod branch): new check that
`os.Getenv("REDIS_URL")` is non-empty. The struct field is left
alone (dev + test still use the default); we inspect the raw env so
the check is "explicitly set" rather than "non-empty after defaults".
* `chat_pubsub.go` `NewChatPubSubService`: if `redisClient == nil`,
emit an `ERROR` at construction time naming the failure mode
("cross-instance messages will be lost"). Same `Warn`→`Error`
promotion for the `Publish` fallback path — runbook-worthy.
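A minimal sketch (zap and go-redis assumed; struct fields
illustrative):

```go
package chat

import (
	"github.com/redis/go-redis/v9"
	"go.uber.org/zap"
)

type ChatPubSubService struct {
	redis *redis.Client // nil => in-memory fallback
	log   *zap.Logger
}

func NewChatPubSubService(redisClient *redis.Client, log *zap.Logger) *ChatPubSubService {
	if redisClient == nil {
		// ERROR, not Warn: on a multi-pod deploy this silently partitions chat.
		log.Error("chat pubsub running without Redis; cross-instance messages will be lost")
	}
	return &ChatPubSubService{redis: redisClient, log: log}
}
```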
Tests: new `chat_pubsub_test.go` with a `zaptest/observer` that asserts
the ERROR-level log fires exactly once when Redis is nil, plus an
in-memory fan-out happy-path so single-pod dev behaviour stays covered.
New `TestValidateForEnvironment_RedisURLRequiredInProduction` mirrors
the Hyperswitch guard test shape.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
With payments disabled, the marketplace flow still completes: orders are
created with status `CREATED`, the download URL is released, and no PSP
call is ever made. In other words: on a misconfigured prod instance, every
purchase is free. The only signal was a silent `hyperswitch_enabled=false`
at boot.
`ValidateForEnvironment()` (already wired at `NewConfig` line 513, before
the HTTP listener binds) now rejects `APP_ENV=production` with
`HyperswitchEnabled=false`. The error message names the failure mode
explicitly ("effectively giving away products") rather than a terse
"config invalid" — this is a revenue leak, not a typo.
Dev and staging are unaffected.
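The check, as a sketch (field names assumed, not the exact config
code):

```go
// Inside ValidateForEnvironment's production branch (sketch).
if c.Environment == "production" && !c.HyperswitchEnabled {
	return errors.New("HYPERSWITCH_ENABLED must be true in production: " +
		"orders would complete without any PSP call, effectively giving " +
		"away products")
}
```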
Tests: 3 new cases in `validation_test.go`
(`TestValidateForEnvironment_HyperswitchRequiredInProduction`) +
`TestLoadConfig_ProdValid` updated to set `HyperswitchEnabled: true`.
`TestValidateForEnvironment_ClamAVRequiredInProduction` fixture also
includes the new field so its "succeeds" sub-test still runs.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
backend-ci.yml's `test -z "$(gofmt -l .)"` strict gate (added in
13c21ac11) failed on a backlog of unformatted files. None of the 85
files in this commit had been edited since the gate was added, and
no push had touched veza-backend-api/** in between, so the gate
never fired until today's CI fixes triggered it.
The diff is exclusively whitespace alignment in struct literals
and trailing-space comments. `go build ./...` and the full test
suite (with VEZA_SKIP_INTEGRATION=1 -short) pass identically.
Update RabbitMQ config and eventbus. Improve secret filter logging.
Refine presence, cloud, and social services. Update announcement and
feature flag handlers. Add track_likes updated_at migration. Rebuild
seed binary.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- PKCE (S256) in OAuth flow: code_verifier stored in oauth_states, code_challenge sent in the auth URL (sketch below)
- CryptoService: AES-256-GCM encryption for OAuth provider tokens at rest
- OAuth redirect URL validated against OAUTH_ALLOWED_REDIRECT_DOMAINS
- CHAT_JWT_SECRET must differ from JWT_SECRET in production
- Migration script: cmd/tools/encrypt_oauth_tokens for existing tokens
- Fixes: VEZA-SEC-003, VEZA-SEC-004, VEZA-SEC-009, VEZA-SEC-010
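S256 derivation per RFC 7636, as a minimal sketch (helper names are
ours):

```go
package oauth

import (
	"crypto/rand"
	"crypto/sha256"
	"encoding/base64"
)

// newCodeVerifier returns a high-entropy verifier (the value stored
// in oauth_states).
func newCodeVerifier() (string, error) {
	b := make([]byte, 32)
	if _, err := rand.Read(b); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(b), nil
}

// codeChallenge is what goes into the auth URL alongside
// code_challenge_method=S256.
func codeChallenge(verifier string) string {
	sum := sha256.Sum256([]byte(verifier))
	return base64.RawURLEncoding.EncodeToString(sum[:])
}
```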
- Add HLSEnabled and HLSStorageDir to backend config (HLS_STREAMING env)
- Register HLS serving routes (master.m3u8, quality playlist, segments)
  behind the HLSEnabled feature flag on existing track routes (sketch below)
- Add GetHLSStatus and TriggerHLSTranscode methods to StreamService
for stream server communication
- Update docker-compose (dev, staging, prod) with HLS env vars and
shared hls-data volume between backend and stream-server
- Stream callback already correctly updates stream_manifest_url
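Flag-gated registration, sketched with assumed gin-style route
paths and handler names:

```go
// Route shapes and handler names are illustrative, not the actual code.
if cfg.HLSEnabled {
	tracks.GET("/:id/hls/master.m3u8", h.ServeHLSMaster)
	tracks.GET("/:id/hls/:quality/playlist.m3u8", h.ServeHLSQualityPlaylist)
	tracks.GET("/:id/hls/:quality/:segment", h.ServeHLSSegment)
}
```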
SEC-08: If HYPERSWITCH_ENABLED=true in production, startup now fails
unless HYPERSWITCH_WEBHOOK_SECRET is set. This prevents webhook
signature verification from being silently bypassed.
- Add FrontendURL to config (FRONTEND_URL or VITE_FRONTEND_URL)
- OAuth handlers use config instead of os.Getenv
- Update TODOS_AUDIT: mark UUID migration items as resolved
- Add ISSUES_P2_BACKLOG.md for GitHub issues
- Add ROUTES_ORPHANES.md for routes without UI
- Document FRONTEND_URL in .env.example
Add validation in ValidateForEnvironment() to fail startup when
CLAMAV_REQUIRED=false in production. Virus scanning is mandatory
for all file uploads in production.
Phase 1 audit - P1.4
- 1.6: Replace hardcoded JWT secrets in chat server tests with runtime-generated
values (env TEST_JWT_SECRET or uuid-based fallback)
- 1.7: Add validateNoBypassFlagsInProduction() in config; fail startup if
BYPASS_CONTENT_CREATOR_ROLE or CSRF_DISABLED is set in production
Refs: AUDIT_TECHNIQUE_INTEGRAL_2026_02_15.md items 1.6, 1.7
- Add DATABASE_READ_URL config and InitReadReplica in database package
- Add ForRead() helper for read-only handler routing (sketch below)
- Update TrackService and TrackSearchService to use read replica for reads
- Document setup in DEPLOYMENT_GUIDE.md and .env.template
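A plausible shape for the helper (GORM assumed; not the exact code):

```go
// readDB is initialized by InitReadReplica from DATABASE_READ_URL;
// it stays nil when no replica is configured.
var readDB *gorm.DB

// ForRead routes read-only queries to the replica when available,
// falling back to the primary otherwise.
func ForRead(primary *gorm.DB) *gorm.DB {
	if readDB != nil {
		return readDB
	}
	return primary
}
```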
- Deleted apps/web/src/utils/optimisticStoreUpdates.ts (unused: no imports found in codebase)
- Mutations already use React Query's onMutate pattern
- No TypeScript errors after deletion
- Actions 4.4.1.2 and 4.4.1.3 complete
- Increase IP rate limit from 100 to 200 requests per minute
- Increase IP burst from 10 to 20
- Increase SimpleRateLimiter limit from 100 to 200
- Allows frontend to make multiple requests during initial load (CSRF, state hydration, etc.)
- Can be overridden via RATE_LIMIT_IP_PER_MINUTE and RATE_LIMIT_LIMIT env vars
- Created ValidateRequiredEnvironmentVariables function (sketch after the file list)
- Validates required vars (JWT_SECRET, DATABASE_URL) in all environments
- Production-specific validations: CORS_ALLOWED_ORIGINS required, no wildcard, no DEBUG log level, RabbitMQ URL if enabled
- Integrated validation at startup in NewConfig() to fail fast if required variables are missing
- Provides clear error messages for missing or invalid environment variables
Files modified:
- veza-backend-api/internal/config/config.go
- VEZA_COMPLETE_MVP_TODOLIST.json
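A condensed sketch of the check (the real variable list and
messages are longer):

```go
import (
	"errors"
	"fmt"
	"os"
	"strings"
)

func ValidateRequiredEnvironmentVariables(env string) error {
	required := []string{"JWT_SECRET", "DATABASE_URL"}
	var missing []string
	for _, k := range required {
		if os.Getenv(k) == "" {
			missing = append(missing, k)
		}
	}
	if env == "production" {
		if origins := os.Getenv("CORS_ALLOWED_ORIGINS"); origins == "" {
			missing = append(missing, "CORS_ALLOWED_ORIGINS")
		} else if strings.Contains(origins, "*") {
			return errors.New("CORS_ALLOWED_ORIGINS must not contain a wildcard in production")
		}
	}
	if len(missing) > 0 {
		return fmt.Errorf("missing required environment variables: %s", strings.Join(missing, ", "))
	}
	return nil
}
```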
- Added HTTP writer for centralized log collection (Loki-compatible)
- Created AggregationConfig with batch processing and flush intervals
- Integrated with the existing zap logger using a multi-core approach (sketch below)
- Added environment variables for configuration (LOG_AGGREGATION_ENABLED, LOG_AGGREGATION_ENDPOINT, etc.)
- Added unit tests for aggregation functionality
- Updated config.go to initialize logger with aggregation if enabled
Phase: PHASE-6
Priority: P2
Progress: 111/267 (41.57%)
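The multi-core idea, sketched; the batching HTTP WriteSyncer itself
(buffer, flush interval) is elided, and the function name is ours:

```go
import (
	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

// newLogger tees the normal console core with a core whose WriteSyncer
// posts batches to the Loki-compatible endpoint.
func newLogger(console zapcore.Core, httpWriter zapcore.WriteSyncer, enabled bool) *zap.Logger {
	if !enabled {
		return zap.New(console)
	}
	enc := zapcore.NewJSONEncoder(zap.NewProductionEncoderConfig())
	httpCore := zapcore.NewCore(enc, httpWriter, zapcore.InfoLevel)
	return zap.New(zapcore.NewTee(console, httpCore))
}
```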
- Added AWS SDK v2 dependency for S3 support
- Created S3StorageService implementing S3Service interface
- Support for AWS S3 and MinIO (S3-compatible storage)
- Added S3 configuration in config.go with environment variables
- Implemented upload, delete, presigned URL, and public URL methods (presign sketch below)
- Added unit tests for service validation and URL generation
- Service integrates with existing TrackStorageService
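Presigned GET with AWS SDK v2, as a sketch (MinIO works the same
once the client is built with a custom endpoint; the helper name is
ours):

```go
import (
	"context"
	"time"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

// presignedGetURL returns a time-limited download URL for bucket/key.
func presignedGetURL(ctx context.Context, client *s3.Client, bucket, key string, ttl time.Duration) (string, error) {
	presigner := s3.NewPresignClient(client)
	req, err := presigner.PresignGetObject(ctx, &s3.GetObjectInput{
		Bucket: aws.String(bucket),
		Key:    aws.String(key),
	}, s3.WithPresignExpires(ttl))
	if err != nil {
		return "", err
	}
	return req.URL, nil
}
```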
- Enhanced secrets management with environment-aware defaults
- Fixed RabbitMQ URL: no default credentials in production
- Added getRabbitMQURL with environment-aware logic (sketch below)
- Added ValidateRequiredSecrets to validate required secrets
- Added RequiredSecretKeys listing production-required secrets
- Added validation for RabbitMQ URL in production
- All secrets properly managed via environment variables
- No hardcoded secrets in production code
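The environment-aware logic, roughly (exact messages and defaults
assumed):

```go
import (
	"errors"
	"os"
)

// getRabbitMQURL: dev/test fall back to a local default; production
// must set the URL explicitly, so no credentials ship in the binary.
func getRabbitMQURL(env string) (string, error) {
	if url := os.Getenv("RABBITMQ_URL"); url != "" {
		return url, nil
	}
	if env == "production" {
		return "", errors.New("RABBITMQ_URL is required in production; refusing default credentials")
	}
	return "amqp://guest:guest@localhost:5672/", nil
}
```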