📝 Walkthrough

Widespread edits: mainly initializer reshaping, linter suppressions, proto/getter adoption, grouped type aliases, added optional response fields, test parallelization, scoped variable refactors, and one signature change (`usagelimiter.handleResult`). Control flow largely preserved.
Sequence Diagram(s): (omitted)

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Focus review on:
Possibly related PRs
Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
✅ Passed checks (3 passed)
✨ Finishing touches
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 21
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (22)
go/apps/gw/server/session.go (1)
187-201: Misleading log message for all responses.

Line 193 logs "gateway error" for all status codes, including successful 2xx responses. This could create confusion when debugging or analyzing logs, as successful requests would be labeled as errors.
Consider one of the following approaches:

- Log only for error status codes: `if status >= 400 { log.Printf(...) }`
- Use a status-neutral message: `log.Printf("gateway response: %d urn: %s", status, s.RequestID())`
- Remove the log if response tracking is handled elsewhere

Apply this diff to log only error responses:
```diff
 func (s *Session) send(status int, body []byte) error {
 	// Store for middleware use
 	s.responseStatus = status
 	s.responseBody = body
 	s.w.WriteHeader(status)
-	log.Printf("gateway error: %d urn: %s", status, s.RequestID())
+	if status >= 400 {
+		log.Printf("gateway error: %d urn: %s", status, s.RequestID())
+	}
 	_, err := s.w.Write(body)
 	if err != nil {
 		return fault.Wrap(err, fault.Internal("failed to send bytes"))
 	}
 	return nil
 }
```

go/apps/krane/backend/docker/get_deployment.go (2)
40-48: Add default case to handle unrecognized container states.

The switch statement only handles three container states (running, exited, created), but Docker containers can have additional states such as `paused`, `restarting`, `removing`, and `dead`. Without a default case, unhandled states will silently use the zero value without any logging or error indication.

Apply this diff to add explicit handling:

```diff
 var status kranev1.DeploymentStatus
 switch c.State {
 case container.StateRunning:
 	status = kranev1.DeploymentStatus_DEPLOYMENT_STATUS_RUNNING
 case container.StateExited:
 	status = kranev1.DeploymentStatus_DEPLOYMENT_STATUS_TERMINATING
 case container.StateCreated:
 	status = kranev1.DeploymentStatus_DEPLOYMENT_STATUS_PENDING
+default:
+	d.logger.Warn("unhandled container state", "state", c.State, "container_id", c.ID)
+	status = kranev1.DeploymentStatus_DEPLOYMENT_STATUS_UNSPECIFIED
 }
```
54-54: Add bounds checking before accessing container ports to prevent a potential panic.

Docker container port mappings are optional, and containers can be created without them. The code accesses `c.Ports[0].PublicPort` at lines 54 and 59 without checking whether the `Ports` slice is non-empty. If a container has no port mappings, this causes a runtime panic. Compare with the Kubernetes backend at `kubernetes/get_deployment.go:65`, which correctly uses `if len(service.Spec.Ports) > 0`.

Add this check before accessing ports:

```go
if len(c.Ports) == 0 {
	d.logger.Warn("container has no port mappings", "container_id", c.ID)
	continue
}
```

go/internal/services/usagelimiter/redis.go (2)
150-158: Fix: invalid range over int for worker spawn (won’t compile).

Use a counted loop to start N workers.
```diff
-	// Start replay workers
+	// Start replay workers
 	replayWorkers := config.ReplayWorkers
 	if replayWorkers <= 0 {
 		replayWorkers = defaultReplayWorkers
 	}
-	for range replayWorkers {
-		go s.replayRequests()
-	}
+	for i := 0; i < replayWorkers; i++ {
+		go s.replayRequests()
+	}
```
216-232: Guard the int64 → int32 narrowing for Remaining.

`remaining` may exceed int32; the current cast silently truncates. Add bounds checks (or clamp) before casting.
```diff
 func (s *counterService) handleResult(req UsageRequest, remaining int64, success bool) UsageResponse {
+	// Prevent narrowing overflow when converting to int32
+	if remaining > math.MaxInt32 {
+		s.logger.Warn("remaining exceeds int32, clamping to MaxInt32", "remaining", remaining, "keyId", req.KeyId)
+		remaining = math.MaxInt32
+	} else if remaining < math.MinInt32 {
+		s.logger.Warn("remaining below int32, clamping to MinInt32", "remaining", remaining, "keyId", req.KeyId)
+		remaining = math.MinInt32
+	}
 	if success {
 		// decrement succeeded - buffer the change for async database sync
 		s.replayBuffer.Buffer(CreditChange{
 			KeyID: req.KeyId,
 			Cost:  req.Cost,
 		})
 		metrics.UsagelimiterDecisions.WithLabelValues("redis", "allowed").Inc()
 		return UsageResponse{Valid: true, Remaining: int32(remaining)} //nolint: gosec
 	}
 	// Insufficient credits - return actual current count for accurate response
 	metrics.UsagelimiterDecisions.WithLabelValues("redis", "denied").Inc()
 	return UsageResponse{Valid: false, Remaining: int32(remaining)} //nolint: gosec
 }
```

Also add:
```go
// at top of file imports
import "math"
```

go/apps/krane/backend/kubernetes/create_deployment.go (1)
85-139: Standardize nolint directive formatting.

The nolint directives use inconsistent formatting: some use `//nolint:exhaustruct` (no space after the colon, lines 85, 87, 101, 110, 120) while others use `//nolint: exhaustruct` (space after the colon, lines 128, 139, 169, 175). While both formats work, consistency improves readability.

Apply this pattern consistently throughout the file:

```diff
-//nolint: exhaustruct
+//nolint:exhaustruct
```

go/pkg/counter/redis_test.go (4)
227-248: Invalid “range over int” loops; use counted for-loops.
`for i := range numWorkers` and `for j := range operationsPerWorker` don’t compile in Go. Replace with counted loops; keep the id capture explicit.

```diff
-	for i := range numWorkers {
+	for i := 0; i < numWorkers; i++ {
 		go func(id int) {
 			defer wg.Done()
-			for j := range operationsPerWorker {
+			for j := 0; j < operationsPerWorker; j++ {
 				// Alternate between increment and get
 				if j%2 == 0 {
 					_, err := ctr.Increment(ctx, key, 1)
 					if err != nil {
 						t.Errorf("worker %d: increment error: %v", id, err)
 						return
 					}
 				} else {
 					_, err := ctr.Get(ctx, key)
 					if err != nil {
 						t.Errorf("worker %d: get error: %v", id, err)
 						return
 					}
 				}
 			}
 		}(i)
 	}
```
518-529: Compile error and data race: “range int”, plus a shared `err` mutated across goroutines.
`for range numWorkers` / `for range decrementsPerWorker` don’t compile. `_, err = ctr.Decrement(...)` writes to the outer `err` from multiple goroutines.

```diff
-	for range numWorkers {
+	for i := 0; i < numWorkers; i++ {
 		go func() {
 			defer wg.Done()
-			for range decrementsPerWorker {
-				_, err = ctr.Decrement(ctx, key, 2)
-				if err != nil {
-					t.Errorf("decrement error: %v", err)
-					return
-				}
+			for j := 0; j < decrementsPerWorker; j++ {
+				if _, err := ctr.Decrement(ctx, key, 2); err != nil {
+					t.Errorf("decrement error: %v", err)
+					return
+				}
 			}
 		}()
 	}
```
621-637: Same issues here: invalid loop and shared `err`; also a redundant predeclaration.

Use a counted loop and keep all variables local inside the goroutine.
```diff
 	wg.Add(numWorkers)
-	for range numWorkers {
+	for i := 0; i < numWorkers; i++ {
 		go func() {
 			defer wg.Done()
-			var existed, success bool
-
-			_, existed, success, err = ctr.DecrementIfExists(ctx, key, 1)
+			_, existed, success, err := ctr.DecrementIfExists(ctx, key, 1)
 			if err != nil {
 				t.Errorf("DecrementIfExists error: %v", err)
 				return
 			}
 			if existed && success {
 				mu.Lock()
 				successCount++
 				mu.Unlock()
 			}
 		}()
 	}
```
804-826: `sync.WaitGroup` misuse and assertions inside goroutines.

- `wg.Go` is not a method on `sync.WaitGroup`; add `wg.Add(numGoroutines)` and use `go func() { defer wg.Done() }()`.
- Avoid `require.*` in goroutines (FailNow cannot be called from non-test goroutines). Use `t.Errorf` checks instead.
- Also fix the invalid `for range numGoroutines`.

```diff
-	var wg sync.WaitGroup
-	startBarrier := make(chan struct{})
-
-	for range numGoroutines {
-		wg.Go(func() {
+	var wg sync.WaitGroup
+	startBarrier := make(chan struct{})
+	wg.Add(numGoroutines)
+	for i := 0; i < numGoroutines; i++ {
+		go func() {
+			defer wg.Done()
 			// Wait for all goroutines to be ready
 			<-startBarrier
-
-			var remaining int64
-			var existed bool
-			var success bool
-
-			remaining, existed, success, err = ctr.DecrementIfExists(ctx, key, decrementAmount)
-			require.NoError(t, err)
-			require.True(t, existed)
-			require.GreaterOrEqual(t, remaining, int64(0), "decrement should always return non-negative actual count")
+			remaining, existed, success, err := ctr.DecrementIfExists(ctx, key, decrementAmount)
+			if err != nil {
+				t.Errorf("DecrementIfExists error: %v", err)
+				return
+			}
+			if !existed {
+				t.Errorf("expected key to exist")
+			}
+			if remaining < 0 {
+				t.Errorf("decrement returned negative count: %d", remaining)
+			}
 			// We don't need to check success here since this is a concurrent test
 			// The final counter value verification is what matters
 			_ = success
-		})
+		}()
 	}
```

go/apps/gw/server/server.go (1)
190-195: Critical: middleware chain seeded with nil handler (will panic).

`handle` must be initialized with the final handler before wrapping. The current code passes nil into the middlewares, breaking the chain.
```diff
-	// Apply middleware
-	var handle HandleFunc
+	// Apply middleware
+	var handle HandleFunc = handler
 	// Reverse the middlewares to run in the desired order
 	for i := len(middlewares) - 1; i >= 0; i-- {
 		handle = middlewares[i](handle)
 	}
```

go/apps/api/routes/v2_keys_whoami/handler.go (1)
231-236: Guard against nil Identity when setting identity ratelimits.

If the key has identity-scoped ratelimits but Identity wasn’t populated, this will panic. Add a nil check.

```diff
-	if len(identityRatelimits) > 0 {
-		response.Identity.Ratelimits = &identityRatelimits
-	}
+	if len(identityRatelimits) > 0 && response.Identity != nil {
+		response.Identity.Ratelimits = &identityRatelimits
+	}
```

go/apps/api/routes/v2_permissions_list_roles/handler.go (1)
58-60: Clamp limit to a safe range (prevent panic on negative/zero values).

`limit` comes from user input. If `limit <= 0`, `nextCursor := roles[limit]` and `roles[:limit]` panic. Also, the int → int32 cast was suppressed. Clamp to [1, 1000] (or your project’s standard), remove the nolint, and keep the +1 fetch.
```diff
-	cursor := ptr.SafeDeref(req.Cursor, "")
-	limit := ptr.SafeDeref(req.Limit, 100)
+	cursor := ptr.SafeDeref(req.Cursor, "")
+	limit := ptr.SafeDeref(req.Limit, 100)
+	if limit <= 0 {
+		limit = 1
+	}
+	const maxLimit = 1000
+	if limit > maxLimit {
+		limit = maxLimit
+	}
@@
-	//nolint:gosec
-	Limit: int32(limit) + 1,
+	Limit: int32(limit) + 1,
@@
-	hasMore := len(roles) > limit
+	hasMore := len(roles) > limit
 	if hasMore {
 		nextCursor = ptr.P(roles[limit].ID)
 		roles = roles[:limit]
 	}
```

Also applies to: 78-80, 89-94
go/apps/api/integration/harness.go (1)
203-217: Fix loop and add health-check timeout (prevents hang; current code won’t compile).
- `for attempt := range maxAttempts` is invalid (cannot range over int).
- Use a client with a timeout to avoid an indefinite `http.Get` on startup failures.

```diff
 	// Wait for server to start
 	maxAttempts := 30
 	healthURL := fmt.Sprintf("http://%s/v2/liveness", ln.Addr().String())
-	for attempt := range maxAttempts {
-		//nolint:gosec // Health check URL is constructed from controlled Docker container address
-		resp, err := http.Get(healthURL)
+	client := &http.Client{Timeout: 500 * time.Millisecond}
+	defer client.CloseIdleConnections()
+	for attempt := 0; attempt < maxAttempts; attempt++ {
+		//nolint:gosec // Health check URL is constructed from controlled Docker container address
+		resp, err := client.Get(healthURL)
 		if err == nil {
 			resp.Body.Close()
 			if resp.StatusCode == http.StatusOK {
 				h.t.Logf("API server %d started on %s", i, ln.Addr().String())
 				break
 			}
 		}
 		if attempt == maxAttempts-1 {
 			require.NoError(h.t, err, "API server %d failed to start", i)
 		}
 		time.Sleep(100 * time.Millisecond)
 	}
```

go/apps/api/routes/v2_ratelimit_list_overrides/handler.go (1)
95-103: Clamp user-supplied limit to prevent panics/DB abuse.
`limit` is taken from the request unchecked. Negative values can panic at line 111 (`overrides[limit]`), and huge values can overload the DB. Clamp to a sane range and compute `int32(limit + 1)` after clamping.

Apply:

```diff
-	limit := ptr.SafeDeref(req.Limit, 50)
+	limit := ptr.SafeDeref(req.Limit, 50)
+	// Clamp: docs suggest 10–100 typical; enforce 1–100
+	if limit < 1 {
+		limit = 1
+	}
+	if limit > 100 {
+		limit = 100
+	}
@@
-	//nolint:gosec
-	Limit: int32(limit) + 1,
+	Limit: int32(limit + 1), // fetch one extra to compute hasMore
```

go/apps/api/routes/v2_apis_list_keys/handler.go (2)
159-176: Clamp limit to avoid panics and excessive DB load.

Unchecked `limit` can be negative (panic at slicing) or extremely large (DoS risk). Clamp to a bounded range before the `limit + 1` fetch.

```diff
-	limit := ptr.SafeDeref(req.Limit, 100)
+	limit := ptr.SafeDeref(req.Limit, 100)
+	if limit < 1 {
+		limit = 1
+	}
+	if limit > 100 {
+		limit = 100
+	}
@@
-	Limit: int32(limit + 1), // nolint:gosec
+	Limit: int32(limit + 1),
```
351-382: Possible nil deref: response.Identity may be nil when setting Identity.Ratelimits.

If `identityRatelimits` is non-empty but `keyData.Identity == nil`, `response.Identity` is nil and this will panic.

```diff
-	if len(identityRatelimits) > 0 {
-		response.Identity.Ratelimits = &identityRatelimits
-	}
+	if len(identityRatelimits) > 0 && response.Identity != nil {
+		response.Identity.Ratelimits = &identityRatelimits
+	}
```

go/apps/api/routes/v2_keys_get_key/handler.go (1)
245-247: Possible nil deref: response.Identity may be nil.

If `identityRatelimits` is non-empty while `keyData.Identity == nil`, this panics.

```diff
-	if len(identityRatelimits) > 0 {
-		response.Identity.Ratelimits = &identityRatelimits
-	}
+	if len(identityRatelimits) > 0 && response.Identity != nil {
+		response.Identity.Ratelimits = &identityRatelimits
+	}
```

go/apps/api/routes/v2_ratelimit_get_override/handler.go (1)
75-81: Don’t write to the outer `err` inside the SWR loader.

Same closure pattern as the delete handler; use a local `err` to avoid side effects.

```diff
-	var response db.FindRatelimitNamespaceRow
-	response, err = db.Query.FindRatelimitNamespace(ctx, h.DB.RO(), db.FindRatelimitNamespaceParams{
+	response, err := db.Query.FindRatelimitNamespace(ctx, h.DB.RO(), db.FindRatelimitNamespaceParams{
 		WorkspaceID: auth.AuthorizedWorkspaceID,
 		Namespace:   req.Namespace,
 	})
 	if err != nil {
 		return db.FindRatelimitNamespace{}, err
 	}
```

go/apps/api/routes/v2_ratelimit_set_override/handler.go (2)
121-132: int64 → int32 downcasts without bounds checks (overflow risk).
req.Limit and req.Duration are int64; the DB expects int32. Values > 2,147,483,647 will silently wrap, corrupting limits/durations.

Apply these changes:
```diff
@@
-	now := time.Now().UnixMilli()
+	now := time.Now().UnixMilli()
+	// Validate to prevent int64→int32 overflow and invalid values
+	if req.Limit < 0 || req.Limit > math.MaxInt32 {
+		return "", fault.New("invalid limit",
+			fault.Code(codes.App.Validation.InvalidInput.URN()),
+			fault.Public("`limit` must be between 0 and 2,147,483,647."))
+	}
+	if req.Duration <= 0 || req.Duration > math.MaxInt32 {
+		return "", fault.New("invalid duration",
+			fault.Code(codes.App.Validation.InvalidInput.URN()),
+			fault.Public("`duration` must be between 1 and 2,147,483,647 ms."))
+	}
@@
-	Limit:    int32(req.Limit),    // nolint:gosec
-	Duration: int32(req.Duration), //nolint:gosec
+	Limit:    int32(req.Limit),
+	Duration: int32(req.Duration),
```

Add import:
```diff
@@
 	"fmt"
+	"math"
 	"net/http"
```

Also applies to: 128-130
101-107: Critical bug: code attempts an INSERT on an existing override ID and will fail with a duplicate-key conflict (409) — use a conditional UPDATE/INSERT pattern.

The handler reuses existing override IDs when found (line ~116: `overrideID = override.ID`) but then always calls the plain `InsertRatelimitOverride`, which is a simple INSERT without upsert semantics. When the override already exists, this will throw a duplicate key constraint violation.

Replace the unconditional `InsertRatelimitOverride` call with conditional logic:

- If an override is found: call `UpdateRatelimitOverride` (which safely updates only limit, duration, async, and updated_at_m without touching created_at)
- If no override is found: call `InsertRatelimitOverride`

The `UpdateRatelimitOverride` function exists and has the correct semantics — it preserves created_at on updates.
253-257: int64 → int32 downcasts without bounds checks (overflow/corruption risk).
- credits.remaining, credits.refill.amount, and ratelimits[].limit can exceed int32.
- Silent wrap will corrupt quotas and enforcement.
Apply guarded casts:
```diff
@@
-	update.RemainingRequests = sql.NullInt32{
-		Valid: true,
-		Int32: int32(credits.Remaining.MustGet()), // nolint:gosec
-	}
+	rem := credits.Remaining.MustGet()
+	if rem < 0 || rem > math.MaxInt32 {
+		return fault.New("invalid credits.remaining",
+			fault.Code(codes.App.Validation.InvalidInput.URN()),
+			fault.Public("`credits.remaining` must be between 0 and 2,147,483,647."))
+	}
+	update.RemainingRequests = sql.NullInt32{Valid: true, Int32: int32(rem)}
@@
-	update.RefillAmount = sql.NullInt32{
-		Valid: true,
-		Int32: int32(refill.Amount), // nolint:gosec
-	}
+	amt := refill.Amount
+	if amt < 0 || amt > math.MaxInt32 {
+		return fault.New("invalid credits.refill.amount",
+			fault.Code(codes.App.Validation.InvalidInput.URN()),
+			fault.Public("`credits.refill.amount` must be between 0 and 2,147,483,647."))
+	}
+	update.RefillAmount = sql.NullInt32{Valid: true, Int32: int32(amt)}
@@
-	ratelimitsToInsert = append(ratelimitsToInsert, db.InsertKeyRatelimitParams{
+	// Validate limit bounds
+	if newRL.Limit < 0 || newRL.Limit > math.MaxInt32 {
+		return fault.New("invalid ratelimit.limit",
+			fault.Code(codes.App.Validation.InvalidInput.URN()),
+			fault.Public("Each ratelimit.limit must be between 0 and 2,147,483,647."))
+	}
+	ratelimitsToInsert = append(ratelimitsToInsert, db.InsertKeyRatelimitParams{
 		ID: rlID,
@@
-	Limit: int32(newRL.Limit), // nolint:gosec
+	Limit: int32(newRL.Limit),
```

Add import:
```diff
@@
 	"fmt"
+	"math"
 	"net/http"
```

Also applies to: 268-273, 371-375
Thank you for following the naming conventions for pull request titles! 🙏
Actionable comments posted: 1
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- .github/pull_request_template.md (2 hunks)
- .github/workflows/autofix.ci.yaml (1 hunks)
🧰 Additional context used
🪛 LanguageTool
.github/pull_request_template.md
[grammar] ~7-~7: Ensure spelling is correct
Context: ...sed to tracking purposes and also helps use understand why this PR exists_ <!-- If...
(QB_NEW_EN_ORTHOGRAPHY_ERROR_IDS_1)
🪛 markdownlint-cli2 (0.18.1)
.github/pull_request_template.md
7-7: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (2)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Build / Build
🔇 Additional comments (2)
.github/pull_request_template.md (1)
41-41: Checklist addition aligns with Go formatting enforcement.

The new checklist item requiring `make fmt` on the `/go` directory is appropriate and enforces consistency with the CI workflow additions in `.github/workflows/autofix.ci.yaml`.

.github/workflows/autofix.ci.yaml (1)
24-31: Workflow changes effectively enforce Go code formatting in CI.

The addition of the "Format Go code" step (lines 29–31) is well-positioned within the CI pipeline and directly supports the PR objective. The step:
- Executes after Go is set up but before installing Node dependencies, ensuring proper sequencing
- Correctly specifies the working directory as `./go`
- Aligns with the new PR template checklist item requiring developers to run `make fmt` locally

The step name simplification on line 24 is also appropriate.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
go/apps/ctrl/workflows/certificate/process_challenge_handler.go (2)
52-57: Missing tenant/ownership check for domain.You resolve by domain only and never verify
dom.WorkspaceID == req.GetWorkspaceId(). This allows cross‑workspace challenge processing. Add an authorization check and fail with a terminal error (403) when mismatched.dom, err := restate.Run(ctx, func(stepCtx restate.RunContext) (db.Domain, error) { return db.Query.FindDomainByDomain(stepCtx, s.db.RO(), req.GetDomain()) }, restate.WithName("resolving domain")) if err != nil { return nil, err } + if dom.WorkspaceID != req.GetWorkspaceId() { + return &hydrav1.ProcessChallengeResponse{ + CertificateId: "", + Status: "failed", + }, nil // or: return nil, restate.TerminalError(errors.New("forbidden"), 403) + }
72-83: TerminalError uses the outer `err` (likely nil) and the wrong return type; make the error explicit and avoid the unused typed result.

The closure references the outer `err`, which is nil here, yielding `TerminalError(nil, 500)`. Also, returning `*certificate.Resource` while discarding it is misleading.

```diff
-	_, err = restate.Run(ctx, func(stepCtx restate.RunContext) (*certificate.Resource, error) {
+	_, err = restate.Run(ctx, func(stepCtx restate.RunContext) (restate.Void, error) {
 		// nolint: godox
 		// TODO: Get ACME client for workspace
 		// This requires implementing GetOrCreateUser from acme/user.go
 		// and setting up challenge providers (HTTP-01, DNS-01)
 		// For now, return error indicating this needs ACME client setup
-		return nil, restate.TerminalError(
-			err,
-			500,
-		)
+		return restate.Void{}, restate.TerminalError(
+			errors.New("ACME client setup not implemented"), // make cause explicit
+			501, // Not Implemented
+		)
 	}, restate.WithName("setup acme client"))
```

Add import:

```diff
 import (
 	"database/sql"
 	"time"
+	"errors"
```

go/apps/ctrl/workflows/deploy/deploy_handler.go (2)
134-139: Compile-time bug: `for i := range 300` is invalid Go.

Use a counted loop. Minimal fix:

```diff
-	for i := range 300 {
+	for i := 0; i < 300; i++ {
 		time.Sleep(time.Second)
 		if i%10 == 0 { // Log every 10 seconds instead of every second
 			w.logger.Info("polling deployment status", "deployment_id", deployment.ID, "iteration", i)
 		}
```
214-231: Fix HTTP client hazards: missing timeout, body leak on non-200, unbounded ReadAll.

The code has three real issues:

- `http.DefaultClient.Get()` has no timeout (can hang indefinitely)
- On non-200 status, `continue` is called without draining the response body, preventing TCP connection reuse and wasting pooled connections
- `io.ReadAll(resp.Body)` reads unbounded data, risking memory exhaustion from large or malicious responses

Apply the suggested fixes:

- Create an `http.Client` with a reasonable timeout (10s)
- Drain the body with `io.Copy(io.Discard, resp.Body)` before closing on non-200
- Wrap `resp.Body` in `io.LimitReader` when reading (5 MiB is reasonable for OpenAPI specs)
- Move `defer resp.Body.Close()` to after the status check to avoid a defer on the early error path
♻️ Duplicate comments (5)
.github/pull_request_template.md (1)
7-7: Grammar error and MD036 linting issue remain unresolved.

This line still has the same two issues flagged in the past review:
- Grammar error: "used to tracking" should be "used for tracking"
- Markdown linting (MD036): Italicized text instead of plain paragraph text
Apply the suggested diff from the past review to fix both issues:
```diff
-_If there is not an issue for this, please create one first. This is used to tracking purposes and also helps us understand why this PR exists_
+If there is not an issue for this, please create one first. This is used for tracking purposes and also helps us understand why this PR exists
```

go/apps/ctrl/workflows/certificate/process_challenge_handler.go (3)
86-92: Good: now logging status‑update failures.

The best‑effort status update no longer drops errors. Consider also logging `workspace_id` and `domain` for traceability.

```diff
-	s.logger.Error("failed to update challenge status", "error", updateErr, "domain_id", dom.ID)
+	s.logger.Error("failed to update challenge status",
+		"error", updateErr, "domain_id", dom.ID, "workspace_id", dom.WorkspaceID, "domain", req.GetDomain())
```
101-116: Step 4 always fails; outer `err` reuse, discards existing cert; implement found/not‑found branches.

- Reuses the outer `err` inside the closure.
- Discards the certificate when found.
- Returns `TerminalError(err, 500)` even when `err == nil` or `db.IsNotFound(err)`, so the workflow always returns "failed".

```diff
-	cert, err := restate.Run(ctx, func(stepCtx restate.RunContext) (EncryptedCertificate, error) {
-		_, err = pdb.Query.FindCertificateByHostname(stepCtx, s.partitionDB.RO(), req.GetDomain())
-		if err != nil && !db.IsNotFound(err) {
-			return EncryptedCertificate{}, err
-		}
-		// TODO: Implement certificate obtain/renew logic
-		// This requires the ACME client from step 3
-		return EncryptedCertificate{}, restate.TerminalError(
-			err,
-			500,
-		)
-	}, restate.WithName("obtaining certificate"))
+	cert, err := restate.Run(ctx, func(stepCtx restate.RunContext) (EncryptedCertificate, error) {
+		existing, findErr := pdb.Query.FindCertificateByHostname(stepCtx, s.partitionDB.RO(), req.GetDomain())
+		if findErr == nil {
+			// Return the existing certificate (map fields accordingly)
+			return EncryptedCertificate{
+				Certificate:         existing.Certificate,         // TODO: adjust to actual field names
+				EncryptedPrivateKey: existing.EncryptedPrivateKey, // TODO: adjust
+				ExpiresAt:           existing.ExpiresAt,           // TODO: adjust
+			}, nil
+		}
+		if !db.IsNotFound(findErr) {
+			return EncryptedCertificate{}, findErr
+		}
+		// Not found: issue/renew via ACME (not yet implemented)
+		return EncryptedCertificate{}, restate.TerminalError(
+			errors.New("certificate issuance not implemented"),
+			501,
+		)
+	}, restate.WithName("obtaining certificate"))
```
TerminalError(err, 500)even whenerr == nilordb.IsNotFound(err), so the workflow always returns "failed".- cert, err := restate.Run(ctx, func(stepCtx restate.RunContext) (EncryptedCertificate, error) { - _, err = pdb.Query.FindCertificateByHostname(stepCtx, s.partitionDB.RO(), req.GetDomain()) - if err != nil && !db.IsNotFound(err) { - return EncryptedCertificate{}, err - } - // TODO: Implement certificate obtain/renew logic - // This requires the ACME client from step 3 - return EncryptedCertificate{}, restate.TerminalError( - err, - 500, - ) - }, restate.WithName("obtaining certificate")) + cert, err := restate.Run(ctx, func(stepCtx restate.RunContext) (EncryptedCertificate, error) { + existing, findErr := pdb.Query.FindCertificateByHostname(stepCtx, s.partitionDB.RO(), req.GetDomain()) + if findErr == nil { + // Return the existing certificate (map fields accordingly) + return EncryptedCertificate{ + Certificate: existing.Certificate, // TODO: adjust to actual field names + EncryptedPrivateKey: existing.EncryptedPrivateKey, // TODO: adjust + ExpiresAt: existing.ExpiresAt, // TODO: adjust + }, nil + } + if !db.IsNotFound(findErr) { + return EncryptedCertificate{}, findErr + } + // Not found: issue/renew via ACME (not yet implemented) + return EncryptedCertificate{}, restate.TerminalError( + errors.New("certificate issuance not implemented"), + 501, + ) + }, restate.WithName("obtaining certificate"))Note: If you return an existing cert, Step 5 should avoid duplicate inserts (prefer upsert or skip when reused). See next comment for ID propagation.
157-160: Success response must include the actual CertificateId.

Clients need a resolvable certificate reference. Capture/propagate the ID and set it here.

```diff
-	return &hydrav1.ProcessChallengeResponse{
-		CertificateId: "",
-		Status:        "success",
-	}, nil
+	return &hydrav1.ProcessChallengeResponse{
+		CertificateId: certID, // populate from persistence/lookup
+		Status:        "success",
+	}, nil
```

One way to obtain `certID` without changing DB APIs:

```diff
 // After successful InsertCertificate:
-}, restate.WithName("persisting certificate"))
+}, restate.WithName("persisting certificate"))
 if err != nil {
 	return nil, err
 }
+
+// Re‑fetch to get ID (or switch Insert to return it):
+row, lookupErr := restate.Run(ctx, func(stepCtx restate.RunContext) (/* cert row type */, error) {
+	return pdb.Query.FindCertificateByHostname(stepCtx, s.partitionDB.RO(), req.GetDomain())
+}, restate.WithName("lookup certificate id"))
+if lookupErr != nil {
+	return nil, lookupErr
+}
+certID := row.ID // adjust to actual field
```

Alternatively, extend `EncryptedCertificate` with an `ID` field and return it from Step 4 when reusing an existing cert, or modify `InsertCertificate` to return the inserted ID.

go/apps/ctrl/workflows/deploy/deploy_handler.go (1)
105-113: Resource inconsistency: hardcoded CPU/memory in the VM upsert diverge from CreateDeployment.

Define the values once and reuse them to keep DB metadata consistent with Krane. This also aligns MiB across both places.

```diff
@@
-	_, err = restate.Run(ctx, func(stepCtx restate.RunContext) (restate.Void, error) {
+	// TODO: make configurable (request or deployment config)
+	cpuMillicores := uint32(512)
+	memorySizeMiB := uint64(512)
+
+	_, err = restate.Run(ctx, func(stepCtx restate.RunContext) (restate.Void, error) {
@@
-		Replicas:      1,
-		CpuMillicores: 512,
-		MemorySizeMib: 512,
+		Replicas:      1,
+		CpuMillicores: cpuMillicores,
+		MemorySizeMib: memorySizeMiB,
@@
-	upsertParams := partitiondb.UpsertVMParams{
-		ID:           instance.GetId(),
-		DeploymentID: deployment.ID,
-		Address:      sql.NullString{Valid: true, String: instance.GetAddress()},
-		// TODO: Make sure configurable later
-		CpuMillicores: 1000,
-		MemoryMb:      1024,
-		Status:        status,
-	}
+	upsertParams := partitiondb.UpsertVMParams{
+		ID:            instance.GetId(),
+		DeploymentID:  deployment.ID,
+		Address:       sql.NullString{Valid: true, String: instance.GetAddress()},
+		CpuMillicores: int64(cpuMillicores),
+		MemoryMb:      int64(memorySizeMiB),
+		Status:        status,
+	}
```

Note: Verify that the DB field named "MemoryMb" represents MiB to avoid unit drift. If it’s MB (10^6), convert accordingly.
Also applies to: 173-181
🧹 Nitpick comments (2)
go/apps/ctrl/workflows/certificate/process_challenge_handler.go (1)
96-98: Failure payload: empty CertificateId is fine.

Returning `CertificateId: ""` on failure is acceptable. Optionally add a failure reason to the logs for observability.

go/apps/ctrl/workflows/deploy/deploy_handler.go (1)
149-153: Noisy/low-signal log: logging the entire GetDeployment response.

Prefer a concise summary (e.g., ready vs total, distinct statuses) to avoid log bloat.
Example:
- status: counts by state
- instances_ready/instances_total
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- .github/pull_request_template.md (2 hunks)
- .github/workflows/autofix.ci.yaml (1 hunks)
- go/apps/ctrl/middleware/auth.go (3 hunks)
- go/apps/ctrl/workflows/certificate/process_challenge_handler.go (4 hunks)
- go/apps/ctrl/workflows/deploy/deploy_handler.go (12 hunks)
- go/apps/gw/router/acme_challenge/handler.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- go/apps/ctrl/middleware/auth.go
- .github/workflows/autofix.ci.yaml
🧰 Additional context used
🧬 Code graph analysis (3)
go/apps/ctrl/workflows/certificate/process_challenge_handler.go (2)
go/pkg/db/models_generated.go (1)
- AcmeChallengesStatusFailed (22-22)

go/gen/proto/hydra/v1/certificate.pb.go (3)

- ProcessChallengeResponse (77-83), (96-96), (111-113)
go/apps/gw/router/acme_challenge/handler.go (2)
go/pkg/fault/wrap.go (3)
- Wrap (25-67)
- Internal (75-89)
- Public (97-111)

go/pkg/codes/unkey_application.go (1)

- App (53-71)
go/apps/ctrl/workflows/deploy/deploy_handler.go (4)
go/pkg/db/deployment_find_by_id.sql_generated.go (1)
- FindDeploymentByIdRow (35-51)

go/gen/proto/ctrl/v1/deployment.pb.go (12)

- CreateDeploymentRequest (136-155), (168-168), (183-185)
- Deployment (404-438), (451-451), (466-468)
- GetDeploymentResponse (360-365), (378-378), (393-395)
- GetDeploymentRequest (316-321), (334-334), (349-351)

go/gen/proto/krane/v1/deployment.pb.go (13)

- CreateDeploymentRequest (160-165), (178-178), (193-195)
- DeploymentRequest (76-86), (99-99), (114-116)
- GetDeploymentResponse (476-481), (494-494), (509-511)
- GetDeploymentRequest (424-430), (443-443), (458-460)
- DeploymentStatus_DEPLOYMENT_STATUS_RUNNING (29-29)

go/gen/proto/partition/v1/gateway.pb.go (12)

- Deployment (104-110), (123-123), (138-140)
- Project (156-162), (175-175), (190-192)
- AuthConfig (253-258), (271-271), (286-288)
- ValidationConfig (298-304), (317-317), (332-334)
🪛 markdownlint-cli2 (0.18.1)
.github/pull_request_template.md
7-7: Emphasis used instead of a heading
(MD036, no-emphasis-as-heading)
🔇 Additional comments (4)
.github/pull_request_template.md (1)
41-41: Good addition for enforcing Go formatting standards.The new checklist item aligns well with the PR's focus on Go linting and formatting consistency via
make fmt.go/apps/gw/router/acme_challenge/handler.go (1)
65-71: LGTM! Error handling implemented correctly.

The change addresses the previous review comment by properly checking the error from `s.Plain` instead of discarding it. The implementation follows the project's error handling patterns consistently and uses appropriate error codes and messages.

go/apps/ctrl/workflows/certificate/process_challenge_handler.go (1)
118-120: Failure response shape OK for workflow semantics. Returning a "failed" status with a nil Go error keeps the workflow durable and terminal. Ensure the failure cause is logged earlier.
If the API contract expects a non‑2xx HTTP status when Status == "failed", confirm that the transport layer maps this correctly.

go/apps/ctrl/workflows/deploy/deploy_handler.go (1)

354-356: Good fix: ProjectID/WorkspaceID now populated in completion step. This resolves the previous tracking-data loss noted earlier. LGTM.
Actionable comments posted: 0
🧹 Nitpick comments (1)
.github/workflows/autofix.ci.yaml (1)
29-30: Consider pinning golangci-lint to a specific version for reproducibility. Using @latest means different versions of golangci-lint could be installed across runs, potentially leading to inconsistent linting results and CI failures that are hard to reproduce locally. Other actions in this workflow are pinned to specific versions/commits.

Consider pinning to a stable release (note: the /v2 module path requires a v2.x tag, so a v1 tag such as v1.59.0 will not resolve):

- - name: Install golangci-lint
-   run: go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@latest
+ - name: Install golangci-lint
+   run: go install github.com/golangci/golangci-lint/v2/cmd/golangci-lint@v2.1.0

Alternatively, consider using the official golangci-lint action if maintaining the version becomes a maintenance burden.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
.github/workflows/autofix.ci.yaml(1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: autofix
🔇 Additional comments (1)
.github/workflows/autofix.ci.yaml (1)
31-33: Verify that make fmt alone is sufficient for the linting workflow. The new formatting step runs make fmt to address linter issues. Confirm that:

- The make fmt target in ./go/Makefile properly invokes golangci-lint with --fix to auto-correct issues, or
- A separate golangci-lint run step should follow to verify remaining issues before commit

Also consider whether the workflow should fail the CI run if linting issues remain (rather than auto-fixing silently), to catch new violations introduced by PRs.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (7)
.github/workflows/autofix.ci.yaml (1)
12-14: Add missing needs dependency for the autofix job. The autofix job references needs.detect_changes.outputs.go in the conditional checks (lines 31, 38, 45, 53) but does not declare detect_changes as a dependency. This will cause the workflow to fail at runtime with undefined context. The lint_docs job below correctly declares this dependency.

Apply this diff to add the missing needs declaration:
needsdeclaration:autofix: if: github.event.pull_request.draft == false || github.event_name != 'pull_request' + needs: [detect_changes] runs-on: depot-ubuntu-24.04-4go/apps/ctrl/workflows/deploy/deploy_handler.go (2)
203-207: for i := range 300 (range over an integer) requires Go 1.22+; this project targets Go 1.25, so it compiles, but on older toolchains use a counted for-loop instead.

- for i := range 300 {
+ for i := 0; i < 300; i++ {
284-305: Resource leak:defer resp.Body.Close()inside a loop withcontinue.Defers run at function return, so multiple iterations keep bodies open. Close explicitly on all paths.
- defer resp.Body.Close() + // do not defer in loop; close explicitly on all paths if resp.StatusCode != http.StatusOK { w.logger.Warn("openapi endpoint returned non-200 status", "status", resp.StatusCode, "host_addr", instance.GetAddress(), "deployment_id", deployment.ID) + resp.Body.Close() continue } // Read the OpenAPI spec - var specBytes []byte - specBytes, err = io.ReadAll(resp.Body) + var specBytes []byte + specBytes, err = io.ReadAll(resp.Body) if err != nil { w.logger.Warn("failed to read OpenAPI spec response", "error", err, "host_addr", instance.GetAddress(), "deployment_id", deployment.ID) + resp.Body.Close() continue } w.logger.Info("openapi spec scraped successfully", "host_addr", instance.GetAddress(), "deployment_id", deployment.ID, "spec_size", len(specBytes)) - return base64.StdEncoding.EncodeToString(specBytes), nil + resp.Body.Close() + return base64.StdEncoding.EncodeToString(specBytes), nilgo/apps/api/routes/v2_keys_update_key/handler.go (4)
56-61: Deferring emit() before the error check risks a panic if emit is nil. Move the defer after the error check or guard it to avoid calling a nil func.
- auth, emit, err := h.Keys.GetRootKey(ctx, s) - defer emit() - if err != nil { - return err - } + auth, emit, err := h.Keys.GetRootKey(ctx, s) + if err != nil { + return err + } + // safe even if emit is nil + defer func() { + if emit != nil { + emit() + } + }()
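A runnable sketch of the guarded-defer pattern, with getRootKey standing in for h.Keys.GetRootKey (the names and signature here are illustrative, not the real API):

```go
package main

import "fmt"

// getRootKey mimics an auth helper that, on failure, may return a nil
// emit func alongside the error.
func getRootKey(fail bool) (string, func(), error) {
	if fail {
		return "", nil, fmt.Errorf("unauthorized")
	}
	return "auth", func() { fmt.Println("emitted") }, nil
}

func handle(fail bool) error {
	auth, emit, err := getRootKey(fail)
	if err != nil {
		return err // emit may be nil here; we never deferred it
	}
	// Deferred only after the error check; the nil guard also protects
	// against a future refactor returning a nil emit on success.
	defer func() {
		if emit != nil {
			emit()
		}
	}()
	_ = auth
	return nil
}

func main() {
	fmt.Println(handle(true))
	fmt.Println(handle(false))
}
```

Deferring before the check means the deferred call still runs after an early return, which is exactly when emit is nil.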
253-257: Add bounds validation to prevent int32/int16 truncation and invalid day values. Casting unvalidated integers to int32/int16 can silently truncate. Validate ranges first and enforce a sane monthly day range.
@@ - } else { - update.RemainingRequests = sql.NullInt32{ - Valid: true, - Int32: int32(credits.Remaining.MustGet()), // nolint:gosec - } - } + } else { + rem := credits.Remaining.MustGet() + if rem < 0 || rem > math.MaxInt32 { + return fault.New("remaining out of range", + fault.Code(codes.App.Validation.InvalidInput.URN()), + fault.Internal("credits.remaining exceeds int32"), + fault.Public("`credits.remaining` must be between 0 and 2147483647."), + ) + } + update.RemainingRequests = sql.NullInt32{ + Valid: true, + Int32: int32(rem), + } + } @@ - update.RefillAmount = sql.NullInt32{ - Valid: true, - Int32: int32(refill.Amount), // nolint:gosec - } + if refill.Amount < 0 || refill.Amount > math.MaxInt32 { + return fault.New("refill.amount out of range", + fault.Code(codes.App.Validation.InvalidInput.URN()), + fault.Internal("refill.amount exceeds int32"), + fault.Public("`credits.refill.amount` must be between 0 and 2147483647."), + ) + } + update.RefillAmount = sql.NullInt32{Valid: true, Int32: int32(refill.Amount)} @@ case openapi.UpdateKeyCreditsRefillIntervalMonthly: if refill.RefillDay == nil { return fault.New("missing refillDay", fault.Code(codes.App.Validation.InvalidInput.URN()), fault.Internal("refillDay required for monthly interval"), fault.Public("`refillDay` must be provided when the refill interval is `monthly`."), ) } + if *refill.RefillDay < 1 || *refill.RefillDay > 28 { + return fault.New("invalid refillDay", + fault.Code(codes.App.Validation.InvalidInput.URN()), + fault.Internal("refillDay must be 1..28 for monthly interval"), + fault.Public("`credits.refill.refillDay` must be between 1 and 28 for monthly interval."), + ) + } update.RefillDay = sql.NullInt16{ Valid: true, Int16: int16(*refill.RefillDay), // nolint:gosec } @@ - // For daily, refill_day should remain NULL + // For daily, refill_day should remain NULL update.RefillDay = sql.NullInt16{Valid: false, Int16: 0}Add import:
import ( + "math" )Also applies to: 269-273, 285-301
415-437: Deduplicate permissions to avoid unique-constraint conflicts and redundant work. Duplicate slugs can produce repeated PermissionIDs and fail key‑permission inserts. Dedup inputs and the final permission IDs.
@@ - for _, requestedSlug := range *req.Permissions { + seenSlugs := make(map[string]struct{}, len(*req.Permissions)) + for _, requestedSlug := range *req.Permissions { + if _, ok := seenSlugs[requestedSlug]; ok { + continue + } + seenSlugs[requestedSlug] = struct{}{} existingPerm, exists := existingPermMap[requestedSlug] if exists { requestedPermissions = append(requestedPermissions, existingPerm) continue } @@ - for _, reqPerm := range requestedPermissions { + seenPermIDs := make(map[string]struct{}, len(requestedPermissions)) + for _, reqPerm := range requestedPermissions { + if _, ok := seenPermIDs[reqPerm.ID]; ok { + continue + } + seenPermIDs[reqPerm.ID] = struct{}{} permissionsToInsert = append(permissionsToInsert, db.InsertKeyPermissionParams{ KeyID: key.ID, PermissionID: reqPerm.ID, WorkspaceID: auth.AuthorizedWorkspaceID, CreatedAt: now, UpdatedAt: sql.NullInt64{Int64: now, Valid: true}, }) }Also applies to: 484-494
528-536: Deduplicate roles to prevent duplicate key-role rows. Similar issue for roles: repeating the same name causes duplicate role IDs and insert conflicts.
@@ - requestedRoles := []db.FindRolesByNamesRow{} - for _, requestedName := range *req.Roles { + requestedRoles := []db.FindRolesByNamesRow{} + seenRoleNames := make(map[string]struct{}, len(*req.Roles)) + for _, requestedName := range *req.Roles { + if _, ok := seenRoleNames[requestedName]; ok { + continue + } + seenRoleNames[requestedName] = struct{}{} existingRole, exists := existingRoleMap[requestedName] if exists { requestedRoles = append(requestedRoles, existingRole) continue } @@ - rolesToInsert := []db.InsertKeyRoleParams{} - for _, reqRole := range requestedRoles { + rolesToInsert := []db.InsertKeyRoleParams{} + seenRoleIDs := make(map[string]struct{}, len(requestedRoles)) + for _, reqRole := range requestedRoles { + if _, ok := seenRoleIDs[reqRole.ID]; ok { + continue + } + seenRoleIDs[reqRole.ID] = struct{}{} rolesToInsert = append(rolesToInsert, db.InsertKeyRoleParams{ KeyID: key.ID, RoleID: reqRole.ID, WorkspaceID: auth.AuthorizedWorkspaceID, CreatedAtM: time.Now().UnixMilli(), }) }Also applies to: 551-571
♻️ Duplicate comments (1)
go/apps/ctrl/workflows/deploy/deploy_handler.go (1)
184-186: CPU/Memory inconsistency: deployment (512/512 MiB) vs VM upsert (1000/1024 MB). This desynchronizes metadata and was flagged earlier. Use a single source of truth and consistent units (MiB vs MB). If the DB expects MiB, rename the field or convert accordingly.
Minimal harmonization:
@@ - _, err = w.krane.CreateDeployment(stepCtx, connect.NewRequest(&kranev1.CreateDeploymentRequest{ + cpuMillicores := uint32(512) // TODO: derive from request/config + memorySizeMib := uint64(512) // TODO: derive from request/config + _, err = w.krane.CreateDeployment(stepCtx, connect.NewRequest(&kranev1.CreateDeploymentRequest{ Deployment: &kranev1.DeploymentRequest{ Namespace: hardcodedNamespace, DeploymentId: deployment.ID, Image: dockerImage, Replicas: 1, - CpuMillicores: 512, - MemorySizeMib: 512, + CpuMillicores: cpuMillicores, + MemorySizeMib: memorySizeMib, }, })) @@ - CpuMillicores: 1000, - MemoryMb: 1024, + CpuMillicores: int64(cpuMillicores), + MemoryMb: int64(memorySizeMib), // verify unit: MiB vs MBTo confirm field semantics in your repo:
#!/bin/bash
# Inspect VM schema and comments for units
rg -nC2 '\bMemoryMb\b|MemorySizeMib\b' go | sed -n '1,200p'

Also applies to: 248-250
🧹 Nitpick comments (6)
go/apps/ctrl/run.go (1)
84-85: Prefer idiomatic short variable declaration. The explicit type declaration and separate assignment is less idiomatic than Go's standard := short declaration form. Since storage.NewS3 already returns (Storage, error), the type is clear and inference is safe.

Apply this diff to use the idiomatic pattern:
- var vaultStorage storage.Storage
- vaultStorage, err = storage.NewS3(storage.S3Config{
+ vaultStorage, err := storage.NewS3(storage.S3Config{
      Logger: logger,
      S3URL: cfg.VaultS3.URL,
      S3Bucket: cfg.VaultS3.Bucket,
      S3AccessKeyID: cfg.VaultS3.AccessKeyID,
      S3AccessKeySecret: cfg.VaultS3.AccessKeySecret,
  })

go/apps/ctrl/workflows/deploy/deploy_handler.go (2)
284-286: Add timeout and context to HTTP request. Using http.DefaultClient without a timeout risks hanging the workflow. Prefer a client with a timeout and a request bound to stepCtx.
- resp, err = http.DefaultClient.Get(openapiURL)
+ httpClient := &http.Client{Timeout: 5 * time.Second}
+ reqOpenAPI, reqErr := http.NewRequestWithContext(stepCtx, http.MethodGet, openapiURL, nil)
+ if reqErr != nil {
+     w.logger.Warn("failed to create http request", "error", reqErr, "url", openapiURL)
+     continue
+ }
+ resp, err = httpClient.Do(reqOpenAPI)
218-221: Overly verbose status logging. Logging resp.Msg prints the full protobuf. Log useful fields (e.g., instance count and per-instance status) to reduce noise.

- w.logger.Info("deployment status",
-     "deployment_id", deployment.ID,
-     "status", resp.Msg,
- )
+ w.logger.Info("deployment status",
+     "deployment_id", deployment.ID,
+     "instances", len(resp.Msg.GetInstances()),
+ )
86-223: Inconsistent nolint directive formatting. The nolint directives use inconsistent formatting throughout the file:

- Lines 86, 101, 110, 120, 215, 223 use //nolint:exhaustruct (no space)
- Lines 128, 139, 205 use // nolint: exhaustruct (space before and after)
- Lines 180, 186 use // nolint: exhaustive (space before and after)

While all formats are valid, consistent formatting improves readability and maintainability.

Apply this diff to standardize on the //nolint: format (no space):

- //nolint: exhaustruct
+ //nolint:exhaustruct
  &appsv1.StatefulSet{

- //nolint: exhaustruct
+ //nolint:exhaustruct
  Spec: appsv1.StatefulSetSpec{

- // nolint: exhaustive
+ //nolint:exhaustive
  Requests: corev1.ResourceList{

- // nolint: exhaustive
+ //nolint:exhaustive
  Limits: corev1.ResourceList{

- // nolint: exhaustruct
+ //nolint:exhaustruct
  if rollbackErr := k.clientset.CoreV1().Services(req.Msg.GetDeployment().GetNamespace()).Delete(ctx, service.Name, metav1.DeleteOptions{}); rollbackErr != nil {
232-304: Credits handling is deeply nested; consider extracting a helper for readability. Extract the credits tri‑state and validation logic into a helper (e.g., buildCreditsUpdate(req.Credits)) to reduce nesting and improve testability.
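The tri‑state semantics the handler implements (field absent means "no change", explicit null means "clear", a value means "overwrite") can be modeled compactly. This is a minimal generic sketch, not the openapi-generated nullable type the handler actually uses:

```go
package main

import "fmt"

// Nullable models three-state update semantics: absent, explicit null, or value.
type Nullable[T any] struct {
	set  bool // present in the request at all
	null bool // present and explicitly null
	val  T
}

func Absent[T any]() Nullable[T]   { return Nullable[T]{} }
func Null[T any]() Nullable[T]     { return Nullable[T]{set: true, null: true} }
func Value[T any](v T) Nullable[T] { return Nullable[T]{set: true, val: v} }

// apply returns the new stored value plus whether it should be NULL in the DB.
func apply[T any](current T, upd Nullable[T]) (T, bool) {
	if !upd.set {
		return current, false // absent: keep as-is
	}
	if upd.null {
		var zero T
		return zero, true // explicit null: clear the field
	}
	return upd.val, false // value: overwrite
}

func main() {
	cur := int64(100)
	v, isNull := apply(cur, Absent[int64]())
	fmt.Println(v, isNull)
	v, isNull = apply(cur, Null[int64]())
	fmt.Println(v, isNull)
	v, isNull = apply(cur, Value(int64(42)))
	fmt.Println(v, isNull)
}
```

Centralizing the three branches in one place is what makes a buildCreditsUpdate-style helper easy to unit-test against all three request shapes.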
go/apps/api/routes/v2_keys_update_key/three_state_test.go (1)
132-132: Use any instead of interface{} for consistency. Tests elsewhere use map[string]any; align here for consistency.

- Meta: nullable.NewNullableWithValue(map[string]interface{}{"updated": "value"}),
+ Meta: nullable.NewNullableWithValue(map[string]any{"updated": "value"}),
@@
- Meta: nullable.NewNullNullable[map[string]interface{}](),
+ Meta: nullable.NewNullNullable[map[string]any](),
@@
- Meta: nullable.NewNullableWithValue(map[string]interface{}{"preserved": "value"}),
+ Meta: nullable.NewNullableWithValue(map[string]any{"preserved": "value"}),

Also applies to: 147-147, 163-163
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
.github/workflows/autofix.ci.yaml (1 hunks)
go/apps/api/routes/v2_keys_create_key/handler.go (4 hunks)
go/apps/api/routes/v2_keys_reroll_key/200_test.go (1 hunks)
go/apps/api/routes/v2_keys_update_key/200_test.go (2 hunks)
go/apps/api/routes/v2_keys_update_key/handler.go (10 hunks)
go/apps/api/routes/v2_keys_update_key/three_state_test.go (12 hunks)
go/apps/ctrl/config.go (0 hunks)
go/apps/ctrl/run.go (2 hunks)
go/apps/ctrl/services/deployment/create_deployment_simple_test.go (0 hunks)
go/apps/ctrl/workflows/deploy/deploy_handler.go (12 hunks)
go/apps/ctrl/workflows/deploy/service.go (2 hunks)
go/apps/gw/server/session.go (1 hunks)
go/apps/krane/backend/docker/create_deployment.go (4 hunks)
go/apps/krane/backend/docker/service.go (1 hunks)
go/apps/krane/backend/kubernetes/create_deployment.go (7 hunks)
go/cmd/ctrl/main.go (1 hunks)
go/cmd/deploy/main.go (2 hunks)
go/cmd/krane/main.go (2 hunks)
💤 Files with no reviewable changes (2)
- go/apps/ctrl/config.go
- go/apps/ctrl/services/deployment/create_deployment_simple_test.go
🚧 Files skipped from review as they are similar to previous changes (7)
- go/cmd/deploy/main.go
- go/apps/api/routes/v2_keys_update_key/200_test.go
- go/apps/ctrl/workflows/deploy/service.go
- go/cmd/krane/main.go
- go/cmd/ctrl/main.go
- go/apps/krane/backend/docker/service.go
- go/apps/api/routes/v2_keys_create_key/handler.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-21T09:45:47.560Z
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#4107
File: go/pkg/clickhouse/key_verifications_test.go:20-20
Timestamp: 2025-10-21T09:45:47.560Z
Learning: In ClickHouse tests (e.g., go/pkg/clickhouse/key_verifications_test.go), parallel execution with t.Parallel() is safe when tests use workspaceID-based isolation. Each test generates a unique workspaceID (via uid.New(uid.WorkspacePrefix)), and all database operations are scoped to that workspaceID, providing logical isolation even when tests share a single ClickHouse instance.
Applied to files:
go/apps/api/routes/v2_keys_update_key/three_state_test.go
🧬 Code graph analysis (3)
go/apps/ctrl/run.go (2)
go/pkg/vault/storage/interface.go (1)
Storage (15-30)
go/pkg/vault/storage/s3.go (2)
NewS3 (33-70), S3Config (25-31)
go/apps/ctrl/workflows/deploy/deploy_handler.go (3)
go/pkg/db/models_generated.go (2)
Deployment (569-585), Project (714-727)
go/gen/proto/krane/v1/deployment.pb.go (7)
DeploymentRequest (76-86, 99-99, 114-116), GetDeploymentResponse (476-481, 494-494, 509-511), DeploymentStatus_DEPLOYMENT_STATUS_RUNNING (29-29)
go/gen/proto/partition/v1/gateway.pb.go (9)
Deployment (104-110, 123-123, 138-140), Project (156-162, 175-175, 190-192), ValidationConfig (298-304, 317-317, 332-334)
go/apps/api/routes/v2_keys_update_key/handler.go (5)
go/pkg/db/types/null_string.go (1)
NullString (10-10)
go/pkg/db/identity_find_by_external_id.sql_generated.go (1)
FindIdentityByExternalIDParams (20-24)
go/pkg/db/ratelimit_list_by_key_id.sql_generated.go (1)
ListRatelimitsByKeyIDRow (24-30)
go/pkg/db/permission_find_by_slugs.sql_generated.go (1)
FindPermissionsBySlugsParams (17-20)
go/pkg/db/role_find_by_names.sql_generated.go (2)
FindRolesByNamesRow (22-25), FindRolesByNamesParams (17-20)
🪛 actionlint (1.7.8)
.github/workflows/autofix.ci.yaml
31-31: property "detect_changes" is not defined in object type {}
(expression)
38-38: property "detect_changes" is not defined in object type {}
(expression)
45-45: property "detect_changes" is not defined in object type {}
(expression)
53-53: property "detect_changes" is not defined in object type {}
(expression)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test API / API Test Local
- GitHub Check: Test Go API Local / Test
- GitHub Check: Build / Build
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (10)
.github/workflows/autofix.ci.yaml (1)
30-55: Go formatting and linting steps are properly gated. The new conditional Go-related steps (Setup Go, cache golangci-lint, install golangci-lint, and format Go code) are well-structured and only execute when Go changes are detected. This avoids unnecessary work and aligns with the PR objective to address Go linting issues.
Verify that once the needs dependency is declared, the conditionals resolve correctly and the workflow executes as intended.

go/apps/api/routes/v2_keys_reroll_key/200_test.go (1)
248-250: LGTM! Clean removal of redundant type cast. Since ttlMs is already declared as int64 on line 221, the explicit int64() cast was unnecessary. The removal improves code clarity without changing behavior.

go/apps/gw/server/session.go (1)
96-97: LGTM! Explicit initialization addresses exhaustruct linter warnings. The explicit initialization of body to an empty byte slice and written to false satisfies the exhaustruct linter without changing behavior. While false is the zero value for bool and []byte{} is functionally equivalent to nil for append operations (line 246), this explicit approach makes the initialization intent clear and resolves the linter warnings as intended.

go/apps/ctrl/run.go (1)
267-267: LGTM: Using standard library constant. Good refactor replacing the string literal with http.MethodPost. This improves maintainability and type safety.

go/apps/krane/backend/docker/create_deployment.go (3)
50-76: LGTM: Appropriate exhaustruct suppressions for Docker SDK types. The //nolint:exhaustruct directives correctly suppress warnings for Docker SDK structs that have many optional fields. The accompanying comments clearly explain the rationale, making these reasonable suppressions that align with the PR's objective to address linter warnings pragmatically.
80-99: Proto accessor methods are correctly applied; Go 1.25 fully supports the syntax used. The code correctly uses GetDeployment().GetReplicas() accessor methods, which follow proto best practices by providing nil-safety. The for i := range <integer> syntax on line 80 requires Go 1.22+, and the project specifies Go 1.25 in go.mod, confirming full compatibility. The //nolint:exhaustruct directives are appropriate for Docker SDK return types.
47-48: Add //nolint:gosec to line 47 for consistency with line 48. Both lines perform integer multiplications flagged by the gosec linter. Line 48 already suppresses this with //nolint:gosec, but line 47 lacks the directive despite the same pattern. Apply the directive to line 47 to maintain consistency:

- cpuNanos := int64(deployment.GetCpuMillicores()) * 1_000_000 // Convert millicores to nanoseconds
+ cpuNanos := int64(deployment.GetCpuMillicores()) * 1_000_000 //nolint:gosec // Intentional conversion
  memoryBytes := int64(deployment.GetMemorySizeMib()) * 1024 * 1024 //nolint:gosec // Intentional conversion
424-426: Good fix: preserve ProjectID/WorkspaceID in deployment steps. This addresses the earlier data loss report.
go/apps/api/routes/v2_keys_update_key/three_state_test.go (1)
121-121: Approve parallelization; each test is workspace-isolated through independent harness instances. The learning note confirms that workspace-scoped isolation is safe for concurrent execution, and each call to NewHarness(t) creates a new instance with a unique workspace via the seeder.

The t.Parallel() adoption is solid. Based on the learnings and codebase patterns, each subtest that calls NewHarness(t) receives its own seeder instance, which generates unique workspaceIDs via uid generation. Database operations are scoped to these workspaceIDs, providing logical isolation even when tests share a single ClickHouse instance. This pattern mirrors the approved pattern from the learnings (workspace-scoped isolation via uid.New(uid.WorkspacePrefix)).
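The isolation argument is easy to demonstrate with a toy model: here store stands in for the shared database instance and newWorkspaceID for uid.New(uid.WorkspacePrefix) (all names are illustrative, not the real harness API):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// store mimics one shared database backend; rows are keyed by workspace.
type store struct {
	mu   sync.Mutex
	rows map[string][]string
}

func (s *store) insert(workspaceID, row string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.rows[workspaceID] = append(s.rows[workspaceID], row)
}

var counter atomic.Int64

// newWorkspaceID yields a unique ID per "subtest", like the seeder does.
func newWorkspaceID() string {
	return fmt.Sprintf("ws_%d", counter.Add(1))
}

// runIsolated runs `tests` concurrent subtests, each writing rowsPerTest rows
// into its own workspace, and returns the per-workspace row counts.
func runIsolated(tests, rowsPerTest int) map[string]int {
	db := &store{rows: map[string][]string{}}
	var wg sync.WaitGroup
	for i := 0; i < tests; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			ws := newWorkspaceID()
			for j := 0; j < rowsPerTest; j++ {
				db.insert(ws, fmt.Sprintf("key_%d", j))
			}
		}()
	}
	wg.Wait()
	counts := make(map[string]int, tests)
	for ws, rows := range db.rows {
		counts[ws] = len(rows)
	}
	return counts
}

func main() {
	counts := runIsolated(10, 5)
	fmt.Println(len(counts)) // one isolated keyspace per concurrent subtest
}
```

Despite all goroutines writing to the same store, every "test" sees exactly its own rows, which is why t.Parallel() is safe under workspaceID scoping.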
109-116: Review comment is incorrect: for attempt := range 3 is valid Go 1.22+ syntax. The project requires Go 1.25, which fully supports iterating over integers with range. The code for attempt := range 3 compiles without errors and is idiomatic modern Go. No changes are necessary.

Likely an incorrect or invalid review comment.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
go/apps/api/routes/v2_identities_delete_identity/200_test.go (1)
41-54: Capture the CreateIdentity return value instead of querying the database. Discarding the CreateIdentity return value on line 41 and then immediately querying the database (lines 48-54) to retrieve the same identity introduces an unnecessary round trip. Every other test case in this file (lines 98, 133, 163, 210, 222, 258, 278) captures the return value directly and uses it.

Apply this diff to match the pattern used consistently throughout the file:
- _ = h.CreateIdentity(seed.CreateIdentityRequest{ + identity := h.CreateIdentity(seed.CreateIdentityRequest{ WorkspaceID: h.Resources().UserWorkspace.ID, ExternalID: externalID, Meta: []byte("{}"), }) - // Verify identity exists before deletion - identity, err := db.Query.FindIdentityByExternalID(ctx, h.DB.RO(), db.FindIdentityByExternalIDParams{ - WorkspaceID: h.Resources().UserWorkspace.ID, - ExternalID: externalID, - Deleted: false, - }) - require.NoError(t, err) - require.Equal(t, externalID, identity.ExternalID) - // Delete the identity via APIgo/apps/ctrl/services/deployment/create_deployment_simple_test.go (1)
425-439: Add Status and CreatedAt fields to the test params construction. The actual service initializes both fields (Status: db.DeploymentsStatusPending, CreatedAt: now), but the test omits them. These are required fields in InsertDeploymentParams and must be explicitly initialized to satisfy exhaustruct checks. Update the test params at lines 425-439 to include:

Status:    db.DeploymentsStatusPending,
CreatedAt: 1724251845000,
♻️ Duplicate comments (1)
go/apps/ctrl/workflows/deploy/deploy_handler.go (1)
243-263: Proto getter usage is correct; the hardcoded CPU/memory issue remains. The switch to GetId(), GetAddress(), and GetStatus() getters is correct and consistent. The nolint: godox directive appropriately suppresses the TODO warning.

However, the hardcoded CpuMillicores: 1000 and MemoryMb: 1024 (lines 249-250) still conflict with the deployment configuration values of 512 and 512 (lines 185-186). This mismatch was previously flagged and remains unresolved.
🧹 Nitpick comments (8)
go/apps/ctrl/services/deployment/create_deployment.go (1)
104-119: Consider consolidating the duplicate switch statements. Both switch blocks handle the same source types (BuildContext vs DockerImage). While the first switch extracts values for logging and database insertion, and the second populates the proto message, consolidating them could reduce duplication and improve maintainability.

Also applies to: 184-190
go/pkg/testutil/seed/seed.go (3)
44-44: LGTM - Appropriate linter suppressions. The nolint directives are justified for these cases. Minor style note: there's an inconsistency in comment spacing (line 44: //nolint vs line 161: // nolint). While both work, consistent spacing improves readability.

Apply this diff for consistency:

- Resources: Resources{}, //nolint:exhaustruct
+ Resources: Resources{}, // nolint:exhaustruct

Also applies to: 161-161
260-274: Revert to idiomatic short variable declaration. The pre-declaration of encryption on line 260 is unnecessary and makes the code less idiomatic. The short variable declaration (:=) is preferred in Go for this use case since Encrypt returns an already-initialized pointer.

Apply this diff to restore idiomatic Go:

- var encryption *vaultv1.EncryptResponse
- encryption, err = s.Vault.Encrypt(ctx, &vaultv1.EncryptRequest{
+ encryption, err := s.Vault.Encrypt(ctx, &vaultv1.EncryptRequest{
      Keyring: req.WorkspaceID,
      Data: key,
  })

If the linter complains about uninitialized fields in EncryptRequest, add a nolint directive instead:

encryption, err := s.Vault.Encrypt(ctx, &vaultv1.EncryptRequest{ // nolint:exhaustruct
    Keyring: req.WorkspaceID,
    Data: key,
})
180-180: Fix field ordering inconsistency in UpdatedAt initialization. The UpdatedAt fields are correctly initialized to NULL (Valid: false) for new records. However, there's inconsistent field ordering:

- Line 347: sql.NullInt64{Int64: 0, Valid: false}
- Lines 359, 408, 453, 489: sql.NullInt64{Valid: false, Int64: 0}

Apply this diff for consistency:

- UpdatedAt: sql.NullInt64{Int64: 0, Valid: false},
+ UpdatedAt: sql.NullInt64{Valid: false, Int64: 0},

Also applies to: 295-295, 347-347, 359-359, 408-408, 453-453, 489-489
go/apps/ctrl/services/build/backend/depot/create_build.go (2)
48-56: Consider consistent parameter naming in validation messages.The validation messages use
build_context_path,unkey_project_id, anddeploymentID(line 55). For consistency, consider usingdeployment_idto match the snake_case pattern of the other parameters.Apply this diff for consistency:
- assert.NotEmpty(deploymentID, "deploymentID is required"), + assert.NotEmpty(deploymentID, "deployment_id is required"),
129-130: Replace overly broad nolint: all directive. The //nolint: all directive on line 129 is too permissive and suppresses all linters. If the intent is to suppress the error check on Release(), use a more specific directive like //nolint: errcheck.

Apply this diff to be more specific:

- //nolint: all
+ //nolint: errcheck
  defer buildkit.Release()
32-42: Consider consistent parameter naming in validation messages. Similar to the depot backend, line 39 uses deploymentID while lines 37-38 use snake_case. For consistency, consider deployment_id.

Apply this diff:

- assert.NotEmpty(deploymentID, "deploymentID is required"),
+ assert.NotEmpty(deploymentID, "deployment_id is required"),
157-161: Consider adding a comment for the empty DepotProjectId field. Since this is the Docker backend (not Depot), setting DepotProjectId to an empty string is correct. However, a brief comment explaining this would help future maintainers understand why it differs from the Depot backend implementation.

Example:

  return connect.NewResponse(&ctrlv1.CreateBuildResponse{
-     DepotProjectId: "",
+     DepotProjectId: "", // Empty for Docker backend; only populated by Depot backend
      ImageName: imageName,
      BuildId: buildID,
  }), nil
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (18)
go/apps/api/routes/v2_identities_delete_identity/200_test.go (1 hunks)
go/apps/api/routes/v2_identities_update_identity/handler.go (1 hunks)
go/apps/api/routes/v2_keys_verify_key/200_test.go (2 hunks)
go/apps/ctrl/run.go (3 hunks)
go/apps/ctrl/services/build/backend/depot/create_build.go (10 hunks)
go/apps/ctrl/services/build/backend/depot/generate_upload_url.go (2 hunks)
go/apps/ctrl/services/build/backend/docker/create_build.go (8 hunks)
go/apps/ctrl/services/build/backend/docker/generate_upload_url.go (2 hunks)
go/apps/ctrl/services/build/storage/s3.go (4 hunks)
go/apps/ctrl/services/deployment/create_deployment.go (1 hunks)
go/apps/ctrl/services/deployment/create_deployment_simple_test.go (1 hunks)
go/apps/ctrl/workflows/deploy/deploy_handler.go (13 hunks)
go/apps/krane/backend/docker/service.go (2 hunks)
go/apps/krane/config.go (0 hunks)
go/cmd/ctrl/main.go (2 hunks)
go/cmd/deploy/control_plane.go (4 hunks)
go/cmd/deploy/main.go (3 hunks)
go/pkg/testutil/seed/seed.go (9 hunks)
💤 Files with no reviewable changes (1)
- go/apps/krane/config.go
✅ Files skipped from review due to trivial changes (2)
- go/apps/api/routes/v2_keys_verify_key/200_test.go
- go/apps/ctrl/services/build/storage/s3.go
🚧 Files skipped from review as they are similar to previous changes (4)
- go/apps/ctrl/run.go
- go/apps/krane/backend/docker/service.go
- go/cmd/deploy/main.go
- go/cmd/ctrl/main.go
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-15T10:12:40.810Z
Learnt from: Flo4604
PR: unkeyed/unkey#4098
File: go/proto/ctrl/v1/deployment.proto:33-36
Timestamp: 2025-10-15T10:12:40.810Z
Learning: In the Unkey codebase proto files (ctrl/v1/build.proto, ctrl/v1/deployment.proto, hydra/v1/deployment.proto), use `dockerfile_path` (not `docker_file_path`) for consistency in generated Go field names.
Applied to files:
go/apps/ctrl/services/build/backend/docker/create_build.go
🧬 Code graph analysis (5)
go/apps/ctrl/services/build/backend/docker/create_build.go (2)
go/gen/proto/ctrl/v1/build.pb.go (3)
CreateBuildResponse (92-99, 112-112, 127-129)
internal/proto/generated/ctrl/v1/build_pb.ts (1)
CreateBuildResponse(60-81)
go/apps/ctrl/services/build/backend/depot/create_build.go (3)
go/gen/proto/ctrl/v1/build.pb.go (3)
CreateBuildRequest (24-32, 45-45, 60-62)
go/pkg/db/project_update_depot_id.sql_generated.go (1)
UpdateProjectDepotIDParams (21-25)
go/pkg/db/types/null_string.go (1)
NullString(10-10)
go/apps/api/routes/v2_identities_delete_identity/200_test.go (1)
go/pkg/testutil/seed/seed.go (1)
CreateIdentityRequest(368-373)
go/apps/ctrl/workflows/deploy/deploy_handler.go (7)
go/pkg/db/models_generated.go (2)
DeploymentsStatusFailed (201-201), Project (714-727)
go/gen/proto/ctrl/v1/build.pb.go (3)
CreateBuildResponse (92-99, 112-112, 127-129)
go/gen/proto/ctrl/v1/deployment.pb.go (9)
CreateDeploymentRequest (136-154, 167-167, 182-184), GetDeploymentResponse (486-491, 504-504, 519-521), GetDeploymentRequest (442-447, 460-460, 475-477)
internal/proto/generated/ctrl/v1/deployment_pb.ts (3)
CreateDeploymentRequest (21-74), GetDeploymentResponse (197-202), GetDeploymentRequest (179-184)
go/gen/proto/krane/v1/deployment.pb.go (10)
CreateDeploymentRequest (160-165, 178-178, 193-195), GetDeploymentResponse (476-481, 494-494, 509-511), GetDeploymentRequest (424-430, 443-443, 458-460), DeploymentStatus_DEPLOYMENT_STATUS_RUNNING (29-29)
go/pkg/partition/db/models_generated.go (1)
VmsStatus (57-57)
go/gen/proto/partition/v1/gateway.pb.go (9)
Project (156-162, 175-175, 190-192), AuthConfig (253-258, 271-271, 286-288), ValidationConfig (298-304, 317-317, 332-334)
go/pkg/testutil/seed/seed.go (3)
go/pkg/db/models_generated.go (1)
Key(640-664)go/gen/proto/vault/v1/service.pb.go (6)
EncryptResponse(156-162)EncryptResponse(175-175)EncryptResponse(190-192)EncryptRequest(104-110)EncryptRequest(123-123)EncryptRequest(138-140)apps/dashboard/lib/vault.ts (1)
Vault(20-76)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Build / Build
- GitHub Check: Test API / API Test Local
🔇 Additional comments (21)
go/apps/api/routes/v2_identities_update_identity/handler.go (1)
350-350: LGTM! Unnecessary cast removed.

The removal of the `int64()` cast is correct. The `newRL.Limit` field from the API request is already `int64`, matching the expected type for `openapi.RatelimitResponse.Limit`. The cast at line 373 remains appropriate since `rl.Limit` comes from the database as `int32` and requires conversion.

go/cmd/deploy/control_plane.go (3)

110-110: LGTM! Using the constant is idiomatic Go.

Replacing the hardcoded "PUT" string with `http.MethodPut` eliminates magic strings and follows Go best practices.

153-153: LGTM! Reusing the error variable is clearer.

Changing from `:=` to `=` correctly reuses the existing `err` variable instead of shadowing it, which addresses linter warnings and improves code clarity.

Also applies to: 164-164

190-190: LGTM! Explicit nil initialization addresses exhaustruct linting.

The explicit `Source: nil` initialization satisfies the exhaustruct linter's requirement for complete struct field initialization. The field is conditionally overwritten in lines 204-217, so this has no functional impact.

go/apps/ctrl/services/deployment/create_deployment.go (1)
177-181: LGTM! Explicit field initialization satisfies the exhaustruct linter.

The explicit nil initialization of pointer fields correctly addresses the exhaustruct linter requirements while maintaining the existing logic: the switch statement immediately below (lines 184-190) sets the appropriate fields based on the deployment source.

go/pkg/testutil/seed/seed.go (1)

243-248: LGTM! Good practice for slice initialization.

Explicitly initializing slices with empty values rather than leaving them nil is good practice. This ensures consistent behavior, especially for JSON serialization, where `[]` is preferred over `null`.
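The serialization difference is easy to demonstrate with `encoding/json`. A minimal sketch (the `resp` type and field name are illustrative, not from the codebase):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// resp mimics a response struct with a slice field.
type resp struct {
	Roles []string `json:"roles"`
}

// marshalRoles shows how encoding/json treats nil vs. empty slices.
func marshalRoles(roles []string) string {
	b, _ := json.Marshal(resp{Roles: roles})
	return string(b)
}

func main() {
	fmt.Println(marshalRoles(nil))        // {"roles":null}
	fmt.Println(marshalRoles([]string{})) // {"roles":[]}
}
```

A nil slice marshals to `null` while an empty-but-non-nil slice marshals to `[]`, which is why seeding with explicit empty slices keeps JSON output consistent.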
17-36: LGTM: Clean refactor to proto getter pattern.The switch from direct field access to
GetUnkeyProjectId()with proper validation and consistent local variable usage is a good improvement.go/apps/ctrl/services/build/backend/depot/create_build.go (3)
27-31: LGTM: Well-defined cache policy constants.Extracting these magic numbers into named constants improves maintainability and makes the cache policy configuration explicit.
164-196: LGTM: Appropriate exhaustruct suppressions.The
exhaustructnolint directives are reasonable for these structs with many optional fields. The directives are specific and align with the PR objectives.
258-291: LGTM: Proper getter usage and informative nolint directives.The switch to getters (
GetProject().GetProjectId()) and the nolint comments with explanations for optional/deprecated fields are well-handled.go/apps/ctrl/services/build/backend/docker/generate_upload_url.go (1)
17-36: LGTM: Consistent with depot backend.The refactor matches the depot backend implementation, ensuring consistency across both build backends.
go/apps/ctrl/services/build/backend/docker/create_build.go (1)
95-102: LGTM: Appropriate exhaustruct suppression.The nolint directive is specific and justified for this struct with optional fields.
go/apps/ctrl/workflows/deploy/deploy_handler.go (9)
44-44: LGTM: Proto getter usage is correct.

Using `GetDeploymentId()` instead of direct field access is the recommended pattern for protobuf messages, providing nil-safety and consistency.

131-132: LGTM: Explicit variable declaration improves type clarity.

The explicit `var buildResp` declaration before assignment is consistent with the exhaustruct linter requirements mentioned in the PR objectives.

179-191: LGTM: Deployment correctly uses the dockerImage variable.

The code correctly uses the `dockerImage` variable (line 183), which is populated from either the build result or the pre-built image. The error message (line 190) also correctly references `dockerImage`. This resolves the concern raised in previous reviews.

210-214: LGTM: Polling logic correctly uses the deployment ID.

The explicit response variable declaration and the use of `deployment.ID` for the `DeploymentId` field are both correct.

226-241: LGTM: Proto getters used consistently for status checks.

The switch to the `GetStatus()` getter (lines 226, 231) maintains nil-safety while preserving the status mapping logic.

280-314: LGTM: OpenAPI scraping correctly uses proto getters and explicit variable declarations.

The explicit declarations for `resp` (line 285) and `specBytes` (line 299) align with linter requirements, and the consistent use of `GetAddress()` throughout the scraping logic is appropriate.

337-365: LGTM: Explicit nil initialization satisfies linter requirements.

The explicit `nil` initialization for `Project`, `AuthConfig`, and `ValidationConfig` (lines 338-340) addresses exhaustruct warnings while maintaining the correct conditional population logic. The use of `vm.GetId()` (line 350) is consistent with the getter pattern.

407-420: LGTM: Explicit IsRolledBack field correctly tracks deployment state.

The addition of `IsRolledBack: false` (line 411) is correct. This code path is only reached for non-rolled-back production deployments (line 407 condition), and explicitly setting `IsRolledBack: false` ensures the project state is correctly persisted in the database.

423-435: LGTM: ProjectID and WorkspaceID are correctly populated.

The `ProjectID` and `WorkspaceID` fields (lines 425-426) are now correctly set to `deployment.ProjectID` and `deployment.WorkspaceID`, ensuring proper tracking and analytics data. This resolves the concern raised in previous reviews about empty string values.
Actionable comments posted: 1
🧹 Nitpick comments (4)
go/apps/gw/server/middleware_errors.go (2)
156-164: Simplify HTML/JSON routing; remove the redundant URN list.

The second switch both disables exhaustive lint and enumerates "all other errors" to return JSON anyway. Prefer a default that falls through to the JSON response to reduce duplication and drift.

Apply this diff:

```diff
 switch urn {
 case codes.UnkeyGatewayErrorsRoutingConfigNotFound:
 	return s.HTML(http.StatusNotFound, []byte(`<!DOCTYPE html>...`))
 case codes.UnkeyGatewayErrorsProxyBadGateway, codes.UnkeyGatewayErrorsProxyProxyForwardFailed:
 	return s.HTML(http.StatusBadGateway, []byte(`<!DOCTYPE html>...`))
 case codes.UnkeyGatewayErrorsProxyServiceUnavailable:
 	return s.HTML(http.StatusServiceUnavailable, []byte(`<!DOCTYPE html>...`))
 case codes.UnkeyGatewayErrorsProxyGatewayTimeout:
 	return s.HTML(http.StatusGatewayTimeout, []byte(`<!DOCTYPE html>...`))
-// All other errors (including App, Auth, Data errors) return JSON
-case codes.UserErrorsBadRequestPermissionsQuerySyntaxError,
-	codes.UserErrorsBadRequestRequestBodyTooLarge,
-	codes.UserErrorsBadRequestRequestTimeout,
-	codes.UserErrorsBadRequestClientClosedRequest,
-	codes.UnkeyAuthErrorsAuthenticationMissing,
-	codes.UnkeyAuthErrorsAuthenticationMalformed,
-	codes.UnkeyAuthErrorsAuthenticationKeyNotFound,
-	codes.UnkeyAuthErrorsAuthorizationInsufficientPermissions,
-	codes.UnkeyAuthErrorsAuthorizationForbidden,
-	codes.UnkeyAuthErrorsAuthorizationKeyDisabled,
-	codes.UnkeyAuthErrorsAuthorizationWorkspaceDisabled,
-	codes.UnkeyDataErrorsKeyNotFound,
-	codes.UnkeyDataErrorsWorkspaceNotFound,
-	codes.UnkeyDataErrorsApiNotFound,
-	codes.UnkeyDataErrorsPermissionDuplicate,
-	codes.UnkeyDataErrorsPermissionNotFound,
-	codes.UnkeyDataErrorsRoleDuplicate,
-	codes.UnkeyDataErrorsRoleNotFound,
-	codes.UnkeyDataErrorsKeyAuthNotFound,
-	codes.UnkeyDataErrorsRatelimitNamespaceNotFound,
-	codes.UnkeyDataErrorsRatelimitNamespaceGone,
-	codes.UnkeyDataErrorsRatelimitOverrideNotFound,
-	codes.UnkeyDataErrorsIdentityNotFound,
-	codes.UnkeyDataErrorsIdentityDuplicate,
-	codes.UnkeyDataErrorsAuditLogNotFound,
-	codes.UnkeyAppErrorsInternalUnexpectedError,
-	codes.UnkeyAppErrorsInternalServiceUnavailable,
-	codes.UnkeyAppErrorsValidationInvalidInput,
-	codes.UnkeyAppErrorsValidationAssertionFailed,
-	codes.UnkeyAppErrorsProtectionProtectedResource,
-	codes.UnkeyAppErrorsPreconditionPreconditionFailed,
-	codes.UnkeyGatewayErrorsValidationRequestInvalid,
-	codes.UnkeyGatewayErrorsValidationResponseInvalid,
-	codes.UnkeyGatewayErrorsAuthUnauthorized,
-	codes.UnkeyGatewayErrorsAuthRateLimited,
-	codes.UnkeyGatewayErrorsInternalInternalServerError,
-	codes.UnkeyGatewayErrorsInternalKeyVerificationFailed,
-	codes.UnkeyGatewayErrorsRoutingVMSelectionFailed:
-	// Return JSON for these errors
+default:
+	// All other errors return JSON below
 }
```

Also applies to: 224-264
148-154: Include URN in logs for better diagnostics.

Add the parsed URN (or code) into the log context to aid debugging and metrics joins.

```diff
 logger.Error("gateway error",
 	"error", err.Error(),
 	"requestId", s.RequestID(),
 	"publicMessage", fault.UserFacingMessage(err),
 	"status", status,
+	"urn", string(urn),
 )
```

go/pkg/zen/middleware_errors.go (2)
254-266: Optional: Map 502/503/504 explicitly instead of folding into 500.

For better client semantics and caching behavior, consider dedicated branches for ProxyBadGateway (502), ProxyServiceUnavailable (503), and ProxyGatewayTimeout (504), similar to the GW middleware.

29-35: Add URN to API error logs (parity with the GW suggestion).

Include the URN/code in the log fields:

```diff
 logger.Error("api error",
 	"error", err.Error(),
 	"requestId", s.RequestID(),
 	"publicMessage", fault.UserFacingMessage(err),
+	"urn", string(urn),
 )
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- go/apps/gw/server/middleware_errors.go (3 hunks)
- go/pkg/zen/middleware_errors.go (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
go/apps/gw/server/middleware_errors.go (1)
go/pkg/codes/constants_gen.go (40)
UnkeyGatewayErrorsInternalInternalServerError(178-178)UnkeyAppErrorsInternalUnexpectedError(118-118)UnkeyAppErrorsInternalServiceUnavailable(120-120)UnkeyAppErrorsValidationAssertionFailed(127-127)UnkeyGatewayErrorsRoutingVMSelectionFailed(159-159)UnkeyGatewayErrorsValidationRequestInvalid(171-171)UnkeyGatewayErrorsValidationResponseInvalid(173-173)UnkeyAppErrorsValidationInvalidInput(125-125)UnkeyAuthErrorsAuthenticationMissing(31-31)UnkeyAuthErrorsAuthenticationMalformed(33-33)UnkeyGatewayErrorsAuthUnauthorized(164-164)UnkeyAuthErrorsAuthenticationKeyNotFound(35-35)UnkeyAuthErrorsAuthorizationForbidden(43-43)UnkeyAuthErrorsAuthorizationInsufficientPermissions(41-41)UnkeyAuthErrorsAuthorizationKeyDisabled(45-45)UnkeyAuthErrorsAuthorizationWorkspaceDisabled(47-47)UnkeyGatewayErrorsRoutingConfigNotFound(157-157)UnkeyDataErrorsKeyNotFound(56-56)UnkeyDataErrorsWorkspaceNotFound(61-61)UnkeyDataErrorsApiNotFound(66-66)UnkeyDataErrorsPermissionNotFound(73-73)UnkeyDataErrorsRoleNotFound(80-80)UnkeyDataErrorsKeyAuthNotFound(85-85)UnkeyDataErrorsRatelimitNamespaceNotFound(90-90)UnkeyDataErrorsRatelimitOverrideNotFound(97-97)UnkeyDataErrorsIdentityNotFound(102-102)UnkeyDataErrorsAuditLogNotFound(109-109)UserErrorsBadRequestRequestTimeout(20-20)UnkeyDataErrorsIdentityDuplicate(104-104)UnkeyDataErrorsRoleDuplicate(78-78)UnkeyDataErrorsPermissionDuplicate(71-71)UnkeyDataErrorsRatelimitNamespaceGone(92-92)UnkeyAppErrorsProtectionProtectedResource(132-132)UnkeyAppErrorsPreconditionPreconditionFailed(137-137)UserErrorsBadRequestRequestBodyTooLarge(18-18)UnkeyGatewayErrorsAuthRateLimited(166-166)UserErrorsBadRequestClientClosedRequest(22-22)UnkeyGatewayErrorsProxyBadGateway(146-146)UnkeyGatewayErrorsProxyProxyForwardFailed(152-152)UnkeyGatewayErrorsProxyServiceUnavailable(148-148)
go/pkg/zen/middleware_errors.go (1)
go/pkg/codes/constants_gen.go (12)
UnkeyGatewayErrorsRoutingConfigNotFound(157-157)UnkeyGatewayErrorsValidationRequestInvalid(171-171)UnkeyGatewayErrorsValidationResponseInvalid(173-173)UnkeyGatewayErrorsAuthUnauthorized(164-164)UnkeyGatewayErrorsProxyGatewayTimeout(150-150)UnkeyGatewayErrorsProxyBadGateway(146-146)UnkeyGatewayErrorsAuthRateLimited(166-166)UnkeyGatewayErrorsProxyServiceUnavailable(148-148)UnkeyGatewayErrorsProxyProxyForwardFailed(152-152)UnkeyGatewayErrorsRoutingVMSelectionFailed(159-159)UnkeyGatewayErrorsInternalInternalServerError(178-178)UnkeyGatewayErrorsInternalKeyVerificationFailed(180-180)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Build / Build
- GitHub Check: Test API / API Test Local
🔇 Additional comments (4)
go/pkg/zen/middleware_errors.go (3)
46-48: LGTM: Not Found mapping extended.

Adding RoutingConfigNotFound to the 404 branch is consistent with GW handling.

64-67: LGTM: Bad Request coverage improved.

Request/response validation and query-syntax errors correctly map to 400.

140-142: LGTM: Unauthorized mapping clarified.

Including Gateway Unauthorized alongside KeyNotFound is correct.

go/apps/gw/server/middleware_errors.go (1)

59-146: Verification passed: all 43 URN constants are mapped to HTTP status codes.

Confirmation: the script confirmed 100% coverage; all constants in `constants_gen.go` are referenced in `middleware_errors.go`. The primary status mapping switch is comprehensive and correctly handles all error codes across 4xx/5xx ranges.

Minor optimization opportunity (not blocking): the secondary switch's explicit case list (lines 176-213) that routes to JSON is redundant. Consider replacing the exhaustive case list with a default case that returns JSON, eliminating the maintenance drift risk while keeping the same behavior.
Actionable comments posted: 1
♻️ Duplicate comments (1)
go/pkg/zen/middleware_errors.go (1)
46-48: 404 mappings added correctly; verify exhaustive coverage.

Adding AuditLogNotFound and RoutingConfigNotFound to 404 looks right. Let's programmatically confirm all not_found URNs are covered in this switch:

```bash
#!/bin/bash
# List all *NotFound URNs and verify they're referenced in middleware_errors.go
set -euo pipefail

codes_file="go/pkg/codes/constants_gen.go"
handler_file="go/pkg/zen/middleware_errors.go"

mapfile -t not_found_syms < <(rg -nP '^\s*(\w+)\s+URN\s*=\s*".*:not_found:' "$codes_file" | cut -d: -f3- | awk '{print $1}')

missing=0
for sym in "${not_found_syms[@]}"; do
  if ! rg -n "\b${sym}\b" "$handler_file" >/dev/null; then
    echo "missing mapping: ${sym}"
    ((missing++)) || true
  fi
done
echo "Total missing not_found mappings: ${missing}"
```
🧹 Nitpick comments (1)
go/pkg/zen/middleware_errors.go (1)
254-267: Retry-After header and a dedicated 429 schema are valid improvements; both are optional enhancements.

The Session API supports setting headers via `AddHeader()` or direct `ResponseWriter().Header().Add()` access. No Retry-After headers are currently used in the codebase, and no dedicated TooManyRequestsErrorResponse schema exists. The current implementation correctly returns 429 with a BadRequestErrorResponse.

If implementing Retry-After: ensure the rate-limit error propagates retry metadata through the fault error chain (currently `UserFacingMessage()` only returns a message string). If adding a dedicated schema: update the OpenAPI code generator to produce TooManyRequestsErrorResponse and regenerate.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- go/pkg/zen/middleware_errors.go (5 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
go/pkg/zen/middleware_errors.go (3)
go/pkg/codes/constants_gen.go (18)
UnkeyDataErrorsAuditLogNotFound(109-109)UnkeyGatewayErrorsRoutingConfigNotFound(157-157)UserErrorsBadRequestPermissionsQuerySyntaxError(16-16)UnkeyGatewayErrorsValidationRequestInvalid(171-171)UnkeyGatewayErrorsValidationResponseInvalid(173-173)UnkeyAuthErrorsAuthenticationKeyNotFound(35-35)UnkeyGatewayErrorsAuthUnauthorized(164-164)UnkeyGatewayErrorsAuthRateLimited(166-166)UnkeyAppErrorsInternalUnexpectedError(118-118)UnkeyGatewayErrorsProxyGatewayTimeout(150-150)UnkeyGatewayErrorsProxyBadGateway(146-146)UnkeyGatewayErrorsProxyServiceUnavailable(148-148)UnkeyAppErrorsInternalServiceUnavailable(120-120)UnkeyAppErrorsValidationAssertionFailed(127-127)UnkeyGatewayErrorsProxyProxyForwardFailed(152-152)UnkeyGatewayErrorsRoutingVMSelectionFailed(159-159)UnkeyGatewayErrorsInternalInternalServerError(178-178)UnkeyGatewayErrorsInternalKeyVerificationFailed(180-180)go/apps/api/openapi/gen.go (4)
BadRequestErrorResponse(65-71)Meta(279-282)BadRequestErrorDetails(47-62)ValidationError(1966-1980)go/pkg/fault/wrapped.go (1)
UserFacingMessage(161-195)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Build / Build
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test API / API Test Local
🔇 Additional comments (3)
go/pkg/zen/middleware_errors.go (3)
64-67: 400 validation mappings look good.

New 400 cases align with Validation*Invalid and permissions query syntax error patterns; payload shape stays consistent.

125-137: 410 Gone response is appropriate.

`RatelimitNamespaceGone` → 410 with a GoneErrorResponse and BaseError fields is correct.

139-142: 401 coverage improved.

Including AuthUnauthorized alongside AuthenticationKeyNotFound reduces misclassified 403/500s.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
go/internal/services/usagelimiter/redis.go (1)
156-158: `for range replayWorkers` over an `int` requires Go 1.22+.

Ranging over an integer was only added in Go 1.22; on earlier toolchains this won't compile. A counted loop spawns N workers on every version:

```diff
-	for range replayWorkers {
-		go s.replayRequests()
-	}
+	for i := 0; i < replayWorkers; i++ {
+		go s.replayRequests()
+	}
```

go/apps/api/routes/v2_ratelimit_limit/handler.go (1)
80-87: Avoid assigning to captured outer `err` inside the loader closure (data race with async revalidation).

SWR executes loaders asynchronously during stale cache revalidation. The `loader` closure at line 78 captures the outer `err` variable (declared at line 71), and lines 80-81 assign to it using `=`:

```go
response, err = db.WithRetryContext(ctx, func() (db.FindRatelimitNamespaceRow, error) {
```

When the cache entry is stale, this closure runs in a background goroutine, creating a data race with the main goroutine reading/writing the same `err` at line 120. Fix by shadowing `err` locally:

```diff
-	response, err = db.WithRetryContext(ctx, func() (db.FindRatelimitNamespaceRow, error) {
+	response, err := db.WithRetryContext(ctx, func() (db.FindRatelimitNamespaceRow, error) {
```
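The scoping difference behind this fix can be shown in isolation. In this sketch (function and variable names are illustrative, not the cache API from the codebase), plain `=` inside a closure writes through to the captured variable, while `:=` declares a closure-local one, which is what prevents a background revalidation from touching the caller's `err`:

```go
package main

import (
	"errors"
	"fmt"
)

// mutateOuter uses plain `=` inside the closure: the assignment writes
// to the captured outer err, so a goroutine running the closure later
// would race with the caller.
func mutateOuter() bool {
	err := errors.New("initial")
	loader := func() { err = nil } // writes the captured variable
	loader()
	return err == nil // true: outer err was mutated
}

// shadowLocal uses `:=` inside the closure: a new, closure-local err
// is declared, and the outer variable is never touched.
func shadowLocal() bool {
	err := errors.New("initial")
	loader := func() {
		err := error(nil) // shadows; local to the closure
		_ = err
	}
	loader()
	return err == nil // false: outer err unchanged
}

func main() {
	fmt.Println(mutateOuter()) // true
	fmt.Println(shadowLocal()) // false
}
```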
♻️ Duplicate comments (1)
go/apps/api/routes/v2_ratelimit_delete_override/handler.go (1)
64-66: Avoid assigning to outer `err` inside the cache loader closure.

This pattern of pre-declaring `var response` and then assigning `response, err = db.WithRetryContext(...)` captures and mutates the outer `err` variable, which can cause confusing control flow. Use short declaration (`:=`) instead to create local variables within the SWR callback scope.

Apply this diff to use short declaration:

```diff
-	var response db.FindRatelimitNamespaceRow
-	response, err = db.WithRetryContext(ctx, func() (db.FindRatelimitNamespaceRow, error) {
+	response, err := db.WithRetryContext(ctx, func() (db.FindRatelimitNamespaceRow, error) {
 		return db.Query.FindRatelimitNamespace(ctx, h.DB.RO(), db.FindRatelimitNamespaceParams{
 			WorkspaceID: auth.AuthorizedWorkspaceID,
 			Namespace:   req.Namespace,
 		})
 	})
```
🧹 Nitpick comments (5)
go/internal/services/usagelimiter/redis.go (4)
216-232: Signature reduction of `handleResult` is reasonable; tighten the int cast to remove the nolint.

Returning only `UsageResponse` simplifies callers. However, the int64→int32 casts are suppressed via `//nolint:gosec`; prefer an explicit bound check to keep the linter on.

Apply within this function:

```diff
-	return UsageResponse{Valid: true, Remaining: int32(remaining)} //nolint: gosec
+	return UsageResponse{Valid: true, Remaining: toInt32(remaining)}
 ...
-	return UsageResponse{Valid: false, Remaining: int32(remaining)} //nolint: gosec
+	return UsageResponse{Valid: false, Remaining: toInt32(remaining)}
```

Add the helper and import (outside this hunk):

```diff
--- a/go/internal/services/usagelimiter/redis.go
+++ b/go/internal/services/usagelimiter/redis.go
@@
 import (
 	"context"
 	"database/sql"
 	"fmt"
+	"math"
 	"time"
@@
 )
+
+func toInt32(n int64) int32 {
+	if n > int64(math.MaxInt32) {
+		return math.MaxInt32
+	}
+	if n < int64(math.MinInt32) {
+		return math.MinInt32
+	}
+	return int32(n)
+}
```
193-195: Consider emitting a fallback metric when delegating to DB.

You already log the fallback; adding a counter helps alerting and SLOs. Example:

```diff
-	return s.dbFallback.Limit(ctx, req)
+	metrics.UsagelimiterFallbackOperations.WithLabelValues("redis_error").Inc()
+	return s.dbFallback.Limit(ctx, req)
```

Repeat similarly in the other fallback sites with labels indicating the reason (e.g., "init_set_error", "post_init_decr_error").
Also applies to: 272-275, 291-293
255-257: Log when an unlimited key reaches the limiter.

This is a logic anomaly per the comment; log at warn to aid diagnosis.

```diff
 	// Return valid anyway to not break existing behavior, but this is a logic error
+	s.logger.Warn("usage limiter called for unlimited key", "keyId", req.KeyId)
```
368-385: Avoid CPU spin in the shutdown drain loop on the timeout path.

The non-blocking receive in a tight loop can busy-spin. Add a short sleep when no item is available:

```diff
 	for {
 		remaining := s.replayBuffer.Size()
 		if remaining == 0 {
 			s.logger.Debug("successfully drained all remaining buffer items")
 			break
 		}
 		// Process remaining items directly
 		select {
 		case change := <-s.replayBuffer.Consume():
 			err := s.syncWithDB(context.Background(), change)
 			if err != nil {
 				s.logger.Error("failed to sync credit change during shutdown", "error", err)
 			}
 		default:
-			// Channel is closed and empty
+			// Channel empty; avoid busy-spin
+			time.Sleep(10 * time.Millisecond)
 		}
 	}
```

go/apps/api/routes/v2_ratelimit_limit/handler.go (1)
148-176: Keep transactional `err` and `namespace` local within the closure.

These assignments also target outer vars (`err`, `namespace`). Prefer local vars to avoid hidden coupling and accidental races.

```diff
 	//nolint: exhaustruct
 	result := db.FindRatelimitNamespace{}
 	now := time.Now().UnixMilli()
 	id := uid.New(uid.RatelimitNamespacePrefix)
-	err = db.Query.InsertRatelimitNamespace(ctx, tx, db.InsertRatelimitNamespaceParams{
+	err := db.Query.InsertRatelimitNamespace(ctx, tx, db.InsertRatelimitNamespaceParams{
 		ID:          id,
 		WorkspaceID: auth.AuthorizedWorkspaceID,
 		Name:        req.Namespace,
 		CreatedAt:   now,
 	})
 	if err != nil && !db.IsDuplicateKeyError(err) {
 		return result, fault.Wrap(err,
 			fault.Code(codes.App.Internal.UnexpectedError.URN()),
 			fault.Public("An unexpected error occurred while creating the namespace."),
 		)
 	}
 	if db.IsDuplicateKeyError(err) {
-		namespace, err = loader(ctx)
+		ns, err := loader(ctx)
 		if err != nil {
 			return result, fault.Wrap(err,
 				fault.Code(codes.App.Internal.UnexpectedError.URN()),
 				fault.Public("An unexpected error occurred while fetching the namespace."),
 			)
 		}
-		return namespace, err
+		return ns, err
 	}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (5)
- go/apps/api/routes/v2_ratelimit_delete_override/handler.go (1 hunks)
- go/apps/api/routes/v2_ratelimit_limit/handler.go (4 hunks)
- go/internal/services/keys/get.go (2 hunks)
- go/internal/services/usagelimiter/redis.go (5 hunks)
- go/pkg/db/retry_test.go (1 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-29T13:48:43.353Z
Learnt from: Flo4604
PR: unkeyed/unkey#3884
File: go/apps/api/routes/v2_ratelimit_delete_override/handler.go:218-228
Timestamp: 2025-08-29T13:48:43.353Z
Learning: In the unkeyed/unkey codebase, when working with ratelimit namespace caching, req.Namespace parameter is either the namespace ID or the namespace name, so cache invalidation only needs to remove cache keys for namespace.ID and namespace.Name - no need to also remove req.Namespace as a separate key.
Applied to files:
go/apps/api/routes/v2_ratelimit_delete_override/handler.go
🧬 Code graph analysis (4)
go/internal/services/keys/get.go (2)
go/pkg/db/key_find_for_verification.sql_generated.go (1)
FindKeyForVerificationRow(92-119)go/pkg/db/retry.go (1)
WithRetryContext(36-46)
go/apps/api/routes/v2_ratelimit_limit/handler.go (3)
go/pkg/db/ratelimit_namespace_find.sql_generated.go (1)
FindRatelimitNamespaceRow(37-45)go/pkg/db/retry.go (1)
WithRetryContext(36-46)go/pkg/db/ratelimit_namespace_insert.sql_generated.go (1)
InsertRatelimitNamespaceParams(33-38)
go/internal/services/usagelimiter/redis.go (2)
go/internal/services/usagelimiter/interface.go (2)
UsageRequest(18-21)UsageResponse(23-26)go/pkg/prometheus/metrics/usagelimiter.go (1)
UsagelimiterDecisions(20-29)
go/apps/api/routes/v2_ratelimit_delete_override/handler.go (2)
go/pkg/db/ratelimit_namespace_find.sql_generated.go (1)
FindRatelimitNamespaceRow(37-45)go/pkg/db/retry.go (1)
WithRetryContext(36-46)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test API / API Test Local
- GitHub Check: Build / Build
🔇 Additional comments (5)
go/internal/services/keys/get.go (1)
73-76: LGTM: Variable scope change for linter compliance.

Moving the `row` declaration outside the `WithRetryContext` call is a safe refactor. The zero value is never used due to the error check at line 77, and the change satisfies linter requirements without affecting behavior.

go/internal/services/usagelimiter/redis.go (3)

278-286: Init-from-DB call sites updated correctly.

Both branches returning `s.handleResult(...), nil` align with the new signature and preserve behavior. No further action here.

296-297: Post-initialization decrement path updated correctly.

Adapts to the single-value `handleResult` and returns `(resp, nil)`. Looks good.

203-204: Verified: all call sites correctly adapted to the single return value.

All four `handleResult` call sites (lines 203, 281, 284, 296) properly wrap the result with `, nil` to satisfy their return signatures. No remaining two-value assignments or error handling attempts from `handleResult`.

go/apps/api/routes/v2_ratelimit_limit/handler.go (1)

350-356: Explicit zero-value override struct looks good.

Clear, exhaustruct-friendly, and preserves behavior with the `found` flag.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
go/apps/api/routes/v2_apis_list_keys/handler.go (1)
379-381: Critical: Nil pointer dereference when identity ratelimits exist without an identity object.

If `keyData.Identity` is nil (line 308), `response.Identity` remains nil. However, ratelimits with `IdentityID.Valid` can still populate `identityRatelimits` (line 371), and line 380 will panic when attempting to set `response.Identity.Ratelimits` on a nil pointer.

Apply this diff to check if `response.Identity` exists before setting identity ratelimits:

```diff
 if len(identityRatelimits) > 0 {
+	if response.Identity != nil {
 	response.Identity.Ratelimits = &identityRatelimits
+	}
 }
```

Alternatively, ensure `response.Identity` is always initialized when identity ratelimits exist:

```diff
 if len(identityRatelimits) > 0 {
+	if response.Identity == nil {
+		// Ratelimits exist but identity wasn't loaded - skip or log warning
+		h.Logger.Warn("identity ratelimits found but identity object is nil", "keyId", keyData.Key.ID)
+	} else {
 	response.Identity.Ratelimits = &identityRatelimits
+	}
 }
```
♻️ Duplicate comments (1)
go/pkg/db/retry_test.go (1)
300-301: Variable scoping issue already flagged in a previous review.

This line reuses the outer `err` variable instead of declaring a local one, creating confusing control flow inconsistent with the pattern at lines 244-245. The previous review comment comprehensively addresses this issue with a proposed fix.
🧹 Nitpick comments (2)
go/apps/ctrl/services/deployment/create_deployment.go (1)
176-191: Consider consolidating the two-step initialization.

The explicit `nil` assignments at lines 177-179 are immediately followed by conditional reassignment at lines 184-190. While this satisfies the exhaustruct linter, it introduces redundancy.

You could simplify by removing the explicit `nil` assignments and relying on Go's default zero values, or by conditionally building the entire struct in one step:

```diff
-	deployReq := &hydrav1.DeployRequest{
-		BuildContextPath: nil,
-		DockerfilePath:   nil,
-		DockerImage:      nil,
-		DeploymentId:     deploymentID,
-		KeyAuthId:        keySpaceID,
-	}
-
-	switch source := req.Msg.GetSource().(type) {
+	var deployReq *hydrav1.DeployRequest
+	switch source := req.Msg.GetSource().(type) {
 	case *ctrlv1.CreateDeploymentRequest_BuildContext:
-		deployReq.BuildContextPath = proto.String(source.BuildContext.GetBuildContextPath())
-		deployReq.DockerfilePath = source.BuildContext.DockerfilePath
+		deployReq = &hydrav1.DeployRequest{
+			BuildContextPath: proto.String(source.BuildContext.GetBuildContextPath()),
+			DockerfilePath:   source.BuildContext.DockerfilePath,
+			DeploymentId:     deploymentID,
+			KeyAuthId:        keySpaceID,
+		}
 	case *ctrlv1.CreateDeploymentRequest_DockerImage:
-		deployReq.DockerImage = proto.String(source.DockerImage)
+		deployReq = &hydrav1.DeployRequest{
+			DockerImage:  proto.String(source.DockerImage),
+			DeploymentId: deploymentID,
+			KeyAuthId:    keySpaceID,
+		}
 	}
```

Note: If exhaustruct requires all fields to be present in every literal, you may need to keep explicit `nil` values in the refactored version.

go/pkg/testutil/seed/seed.go (1)
338-366: Minor inconsistency in field ordering (nitpick).

The `UpdatedAt` field is correctly initialized to an invalid null for newly created ratelimits. However, there's a minor inconsistency in field ordering:

- Line 347: `sql.NullInt64{Int64: 0, Valid: false}`
- Line 359: `sql.NullInt64{Valid: false, Int64: 0}`

While functionally equivalent, consistent field ordering improves readability. Consider using the same order throughout the file.

Apply this diff to standardize field ordering:

```diff
 return db.Ratelimit{
 	ID:          ratelimitID,
 	Name:        req.Name,
 	WorkspaceID: req.WorkspaceID,
 	CreatedAt:   createdAt,
-	UpdatedAt:   sql.NullInt64{Valid: false, Int64: 0},
+	UpdatedAt:   sql.NullInt64{Int64: 0, Valid: false},
 	KeyID:       sql.NullString{String: ptr.SafeDeref(req.KeyID, ""), Valid: req.KeyID != nil},
 	IdentityID:  sql.NullString{String: ptr.SafeDeref(req.IdentityID, ""), Valid: req.IdentityID != nil},
 	Limit:       req.Limit,
 	Duration:    req.Duration,
 	AutoApply:   req.AutoApply,
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- go/apps/api/integration/multi_node_usagelimiting/run.go (5 hunks)
- go/apps/api/routes/v2_apis_list_keys/handler.go (5 hunks)
- go/apps/api/routes/v2_keys_create_key/handler.go (4 hunks)
- go/apps/api/routes/v2_keys_reroll_key/200_test.go (1 hunks)
- go/apps/api/routes/v2_keys_reroll_key/handler.go (1 hunks)
- go/apps/api/routes/v2_keys_update_key/200_test.go (2 hunks)
- go/apps/api/routes/v2_keys_update_key/400_test.go (1 hunks)
- go/apps/api/routes/v2_keys_update_key/403_test.go (1 hunks)
- go/apps/api/routes/v2_keys_update_key/404_test.go (2 hunks)
- go/apps/api/routes/v2_keys_update_key/three_state_test.go (12 hunks)
- go/apps/api/routes/v2_keys_verify_key/200_test.go (2 hunks)
- go/apps/api/routes/v2_keys_verify_key/multiple_ratelimits_overcommit_test.go (1 hunks)
- go/apps/ctrl/services/deployment/create_deployment.go (1 hunks)
- go/pkg/clickhouse/key_verifications_test.go (1 hunks)
- go/pkg/db/key_data.go (3 hunks)
- go/pkg/db/retry_test.go (1 hunks)
- go/pkg/testutil/seed/seed.go (9 hunks)
✅ Files skipped from review due to trivial changes (1)
- go/apps/api/routes/v2_keys_verify_key/200_test.go
🚧 Files skipped from review as they are similar to previous changes (10)
- go/apps/api/routes/v2_keys_reroll_key/200_test.go
- go/pkg/clickhouse/key_verifications_test.go
- go/apps/api/routes/v2_keys_update_key/403_test.go
- go/apps/api/routes/v2_keys_update_key/400_test.go
- go/apps/api/routes/v2_keys_verify_key/multiple_ratelimits_overcommit_test.go
- go/apps/api/routes/v2_keys_update_key/200_test.go
- go/apps/api/routes/v2_keys_update_key/404_test.go
- go/apps/api/routes/v2_keys_update_key/three_state_test.go
- go/apps/api/integration/multi_node_usagelimiting/run.go
- go/pkg/db/key_data.go
🧰 Additional context used
🧠 Learnings (11)
📓 Common learnings
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3755
File: .github/workflows/job_detect_changes.yaml:111-114
Timestamp: 2025-08-08T16:10:00.224Z
Learning: Repo: unkeyed/unkey — When imeyer writes “make me an issue” on a PR thread, automatically open a thorough GitHub issue (Summary, Impact, Where, Repro, Observed vs Expected, Acceptance Criteria, Validation Plan, Out of Scope, References), include backlinks to the PR and comment, and assign to imeyer.
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3755
File: .github/workflows/runbook-freshness-check.yaml:157-173
Timestamp: 2025-08-08T14:59:52.283Z
Learning: Repo unkeyed/unkey: When a CI/workflow fix is deferred, imeyer prefers a thorough GitHub issue be opened with sections (Summary, Impact, Where, Repro, Observed vs Expected, Acceptance Criteria, Validation Plan, Out of Scope, References) and assigned to imeyer, including backlinks to the originating PR and comment.
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3755
File: .github/actions/setup-node/action.yaml:0-0
Timestamp: 2025-08-08T15:10:46.436Z
Learning: Repo: unkeyed/unkey — Preference: If imeyer comments “issue” on a PR thread, automatically open a thorough GitHub issue (Summary, Impact, Where, Observed vs Expected, Acceptance Criteria, Validation Plan, Out of Scope, References), include backlinks to the PR and comment, and assign to imeyer.
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3785
File: go/apps/api/routes/v2_keys_reroll_key/401_test.go:52-61
Timestamp: 2025-08-14T16:25:48.167Z
Learning: User Flo4604 requested creation of a GitHub issue to track converting all test files to use table-driven test patterns as a broader codebase improvement, following the suggestion made during review of go/apps/api/routes/v2_keys_reroll_key/401_test.go.
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3755
File: .github/actions/setup-node/action.yaml:0-0
Timestamp: 2025-08-08T15:10:46.436Z
Learning: For repo unkeyed/unkey and PR review workflows: When imeyer comments "issue" on a thread, automatically create a thorough GitHub issue (sections: Summary, Impact, Where, Repro/Observed vs Expected, Acceptance Criteria, Validation Plan, Out of Scope, References), include backlinks to the PR and the specific comment, and assign the issue to imeyer.
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 3297
File: apps/dashboard/lib/trpc/routers/authorization/roles/query.ts:210-323
Timestamp: 2025-06-04T20:13:12.060Z
Learning: The user ogzhanolguncu prefers explicit, duplicated code over abstracted helper functions when it improves readability, even if it means some duplication in filter building functions in the authorization roles query module.
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2825
File: apps/dashboard/app/(app)/logs-v2/hooks/use-bookmarked-filters.ts:0-0
Timestamp: 2025-01-30T20:51:44.359Z
Learning: The user (ogzhanolguncu) prefers to handle refactoring suggestions in separate PRs to maintain focus in the current PR.
📚 Learning: 2025-07-16T15:38:53.491Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3606
File: go/pkg/db/replica.go:8-11
Timestamp: 2025-07-16T15:38:53.491Z
Learning: For debugging database replica usage in go/pkg/db/replica.go, it's acceptable to mark QueryRowContext operations as "success" even though SQL errors only surface during row.Scan() calls. The timing metrics are the primary concern for debugging replica performance patterns.
Applied to files:
go/pkg/db/retry_test.go
📚 Learning: 2025-08-21T15:54:45.198Z
Learnt from: chronark
Repo: unkeyed/unkey PR: 3825
File: go/internal/services/usagelimiter/limit.go:38-0
Timestamp: 2025-08-21T15:54:45.198Z
Learning: In go/internal/services/usagelimiter/limit.go, the UpdateKeyCreditsDecrement operation cannot be safely wrapped with db.WithRetry due to the lack of idempotency mechanisms in the current tech stack. Retrying this non-idempotent write operation risks double-charging users if the first attempt commits but the client sees a transient error.
Applied to files:
- go/pkg/db/retry_test.go
- go/apps/api/routes/v2_keys_reroll_key/handler.go
📚 Learning: 2025-10-30T15:10:52.710Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 4190
File: go/internal/services/keys/verifier.go:51-53
Timestamp: 2025-10-30T15:10:52.710Z
Learning: PR #4190 for unkeyed/unkey is focused solely on database schema and query changes for identity-based credits. It adds IdentityCredits and KeyCredits fields to structs and queries, but does not implement the priority enforcement logic in the usagelimiter. The logic implementation is intentionally deferred to a later PR in the stack.
Applied to files:
- go/apps/api/routes/v2_keys_reroll_key/handler.go
- go/apps/api/routes/v2_keys_create_key/handler.go
- go/pkg/testutil/seed/seed.go
📚 Learning: 2025-08-08T15:09:01.312Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3753
File: go/apps/api/openapi/config.yaml:9-10
Timestamp: 2025-08-08T15:09:01.312Z
Learning: Repo unkeyed/unkey: In go/apps/api/openapi, oapi-codegen doesn’t support OpenAPI 3.1 union nullability; overlay.yaml must be applied before codegen. The overlay key in oapi-codegen config isn’t supported—use a pre-step (programmatic or CLI) to merge overlay into the bundled spec, then run oapi-codegen.
Applied to files:
go/apps/api/routes/v2_keys_reroll_key/handler.go
📚 Learning: 2025-08-14T17:45:14.448Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3785
File: go/apps/api/routes/v2_keys_reroll_key/handler.go:176-193
Timestamp: 2025-08-14T17:45:14.448Z
Learning: When rerolling keys in the v2 reroll endpoint, expiration dates should be copied from the original key to preserve business constraints like trial periods or temporary access policies. Users expect rerolled keys to maintain the same expiration limitations as the original.
Applied to files:
go/apps/api/routes/v2_keys_reroll_key/handler.go
📚 Learning: 2025-07-17T14:24:20.403Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3631
File: go/pkg/db/bulk_keyring_insert.sql.go:23-25
Timestamp: 2025-07-17T14:24:20.403Z
Learning: In go/pkg/db/bulk_keyring_insert.sql.go and similar bulk insert generated files, hardcoded zero values for fields like size_approx and size_last_updated_at are intentional and reflect the original SQL query structure, not missing parameters.
Applied to files:
- go/apps/api/routes/v2_keys_reroll_key/handler.go
- go/pkg/testutil/seed/seed.go
📚 Learning: 2025-08-14T16:25:48.167Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3785
File: go/apps/api/routes/v2_keys_reroll_key/401_test.go:52-61
Timestamp: 2025-08-14T16:25:48.167Z
Learning: User Flo4604 requested creation of a GitHub issue to track converting all test files to use table-driven test patterns as a broader codebase improvement, following the suggestion made during review of go/apps/api/routes/v2_keys_reroll_key/401_test.go.
Applied to files:
go/apps/api/routes/v2_keys_create_key/handler.go
📚 Learning: 2025-08-19T09:46:03.702Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3800
File: go/internal/services/usagelimiter/redis.go:176-186
Timestamp: 2025-08-19T09:46:03.702Z
Learning: Zero-cost requests in the usage limiter should not change the remaining credits but should still return the actual remaining credits for that key, not treat the key as unlimited. A key with 100 credits remaining should still report 100 credits remaining after a zero-cost request.
Applied to files:
go/apps/api/routes/v2_apis_list_keys/handler.go
📚 Learning: 2025-10-15T10:12:40.810Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 4098
File: go/proto/ctrl/v1/deployment.proto:33-36
Timestamp: 2025-10-15T10:12:40.810Z
Learning: In the Unkey codebase proto files (ctrl/v1/build.proto, ctrl/v1/deployment.proto, hydra/v1/deployment.proto), use `dockerfile_path` (not `docker_file_path`) for consistency in generated Go field names.
Applied to files:
go/apps/ctrl/services/deployment/create_deployment.go
📚 Learning: 2025-09-01T15:10:44.959Z
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3899
File: go/proto/metald/v1/deployment.proto:7-15
Timestamp: 2025-09-01T15:10:44.959Z
Learning: In the unkey/unkey repository, the metald service receives deployment_id values from upstream services rather than generating them internally. The deployment_id field in DeploymentRequest is part of a service-to-service communication pattern.
Applied to files:
go/apps/ctrl/services/deployment/create_deployment.go
🧬 Code graph analysis (3)
go/apps/api/routes/v2_keys_create_key/handler.go (3)
go/pkg/db/models_generated.go (2)
- `Identity` (629-638)
- `Permission` (704-712)
go/pkg/db/permission_find_by_slugs.sql_generated.go (1)
- `FindPermissionsBySlugsParams` (17-20)
go/pkg/db/role_find_by_names.sql_generated.go (2)
- `FindRolesByNamesRow` (22-25)
- `FindRolesByNamesParams` (17-20)
go/apps/api/routes/v2_apis_list_keys/handler.go (3)
go/pkg/db/key_data.go (1)
- `KeyData` (9-21)
go/apps/api/openapi/gen.go (3)
- `KeyResponseData` (216-250)
- `Meta` (279-282)
- `Identity` (161-173)
go/pkg/db/models_generated.go (2)
- `Identity` (629-638)
- `Key` (640-664)
go/pkg/testutil/seed/seed.go (2)
go/pkg/db/models_generated.go (1)
- `Key` (640-664)
go/gen/proto/vault/v1/service.pb.go (6)
- `EncryptResponse` (156-162)
- `EncryptResponse` (175-175)
- `EncryptResponse` (190-192)
- `EncryptRequest` (104-110)
- `EncryptRequest` (123-123)
- `EncryptRequest` (138-140)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Build / Build
- GitHub Check: Test API / API Test Local
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (14)
go/apps/api/routes/v2_keys_reroll_key/handler.go (1)
300-308: LGTM! Appropriate lint suppression for partial struct initialization.

The `exhaustruct` directive is correctly applied here. The `UpdateKeyParams` struct is designed for partial updates, and only the expiration-related fields need to be set when expiring the old key during reroll. Requiring all fields to be initialized would be incorrect for this use case.

go/apps/api/routes/v2_keys_create_key/handler.go (4)
213-214: LGTM: Correct exhaustruct fix for identity lookup.

Pre-declaring `identity` satisfies the exhaustruct linter by explicitly initializing all struct fields to zero values. This is functionally equivalent to the short declaration form and safe because the error is checked before `identity` is used.
346-346: LGTM: Correct nullable field initialization.

Explicitly initializing `UpdatedAt` with `Valid: false` correctly represents an unset nullable field for a new rate limit record. The `Int64: 0` value satisfies exhaustruct and will be ignored by the database driver when `Valid` is false.
362-363: LGTM: Correct exhaustruct fix for permissions lookup.

Pre-declaring `existingPermissions` as a slice (initialized to nil) is functionally equivalent to the short declaration form and satisfies exhaustruct requirements. The error check before usage ensures correctness.
474-475: LGTM: Correct exhaustruct fix for roles lookup.

Pre-declaring `existingRoles` follows the same safe pattern as the permissions lookup, satisfying exhaustruct while maintaining functional equivalence to the short declaration form.

go/pkg/testutil/seed/seed.go (7)
175-181: LGTM: UpdatedAt initialization for new key permission.

Setting `UpdatedAt` to an invalid null (`Valid: false`) with a zero value is semantically correct for a newly created key permission that hasn't been updated yet.
243-248: LGTM: Explicit initialization of CreateKeyResponse.

Initializing `RolesIds` and `PermissionIds` as empty slices rather than nil is a good practice for consistency and prevents potential nil pointer issues in consuming code.
259-274: LGTM: Encryption handling remains correct.

The explicit pointer declaration for `encryption` is a style change that doesn't affect functionality. The use of `GetEncrypted()` and `GetKeyId()` accessor methods is correct for protobuf-generated types.
288-300: LGTM: UpdatedAt initialization for key permission.

Consistent with the pattern established elsewhere in this file for newly created key permissions.
400-410: LGTM: UpdatedAt initialization for new identity.

The `UpdatedAt` field is correctly set to an invalid null for a newly created identity. Note the same field ordering consideration mentioned in the CreateRatelimit review applies here as well.
447-455: LGTM: UpdatedAtM initialization for new role.

The `UpdatedAtM` field is correctly initialized to an invalid null for a newly created role. The field name matches the database model convention.
482-491: LGTM: UpdatedAtM initialization for new permission.

The `UpdatedAtM` field is correctly initialized to an invalid null for a newly created permission, consistent with the pattern used for other entities.

go/apps/api/routes/v2_apis_list_keys/handler.go (2)
251-260: Explicit nil initialization satisfies exhaustruct linter.

The explicit nil assignments for pointer fields are redundant in idiomatic Go (zero values are nil by default), but they satisfy the exhaustruct linter requirement, which aligns with the PR objectives. This is acceptable given the goal to enable golangci-lint.
232-233: Fix critical nil pointer dereference in ratelimits handling.

The code at line 379-381 sets `response.Identity.Ratelimits` without verifying `response.Identity` is not nil. Since `response.Identity` is only initialized when `keyData.Identity != nil` (line 308), but `identityRatelimits` can be populated independently when ratelimit entries have `IdentityID.Valid == true`, this will panic at runtime when identity ratelimits exist without a parent identity object.

```go
// Fix: Check response.Identity is not nil before setting ratelimits
if len(identityRatelimits) > 0 {
	if response.Identity == nil {
		response.Identity = &openapi.Identity{
			Meta:       nil,
			Ratelimits: nil,
			Id:         "", // or appropriate zero value
			ExternalId: "", // or appropriate zero value
		}
	}
	response.Identity.Ratelimits = &identityRatelimits
}
```

⛔ Skipped due to learnings
Learnt from: Flo4604 Repo: unkeyed/unkey PR: 3785 File: go/apps/api/routes/v2_keys_reroll_key/401_test.go:52-61 Timestamp: 2025-08-14T16:25:48.167Z Learning: User Flo4604 requested creation of a GitHub issue to track converting all test files to use table-driven test patterns as a broader codebase improvement, following the suggestion made during review of go/apps/api/routes/v2_keys_reroll_key/401_test.go.Learnt from: Flo4604 Repo: unkeyed/unkey PR: 4190 File: go/internal/services/keys/verifier.go:51-53 Timestamp: 2025-10-30T15:10:52.710Z Learning: PR #4190 for unkeyed/unkey is focused solely on database schema and query changes for identity-based credits. It adds IdentityCredits and KeyCredits fields to structs and queries, but does not implement the priority enforcement logic in the usagelimiter. The logic implementation is intentionally deferred to a later PR in the stack.Learnt from: Flo4604 Repo: unkeyed/unkey PR: 2955 File: go/apps/api/routes/v2_identities_create_identity/handler.go:162-202 Timestamp: 2025-03-19T09:25:59.751Z Learning: In the Unkey codebase, input validation for API endpoints is primarily handled through OpenAPI schema validation, which occurs before requests reach the handler code. For example, in the identities.createIdentity endpoint, minimum values for ratelimit duration and limit are defined in the OpenAPI schema rather than duplicating these checks in the handler.
Actionable comments posted: 2
🧹 Nitpick comments (3)
go/pkg/codes/generate.go (1)
1-1: Consider a more targeted linting suppression or fixing the underlying issues.

The `//nolint: all` directive disables all linting rules for the entire file, which is overly broad and prevents catching future issues. Since this is a code generation script, consider one of these approaches:

Preferred: Use specific suppressions for the actual issues (likely unchecked errors from `fmt.Fprintf` and `f.WriteString` calls throughout the file): `//nolint: errcheck`

Alternative: Address the underlying issues by checking errors properly in the code generation logic.
If build scripts are intentionally excluded from linting standards, document why blanket suppression is acceptable here.
go/cmd/api/main.go (2)
14-19: Consider using a linter directive instead of redundant zero-value initializations.

Explicitly initializing struct fields to their zero values (`[]string{}`, `""`, `[]*cli.Command{}`) is redundant in Go, as uninitialized fields automatically receive zero values. This adds verbosity without conveying meaningful information. A cleaner approach would be to use a linter suppression comment:

```diff
 var Cmd = &cli.Command{
-	Aliases:     []string{},
-	Description: "",
-	Version:     "",
-	Commands:    []*cli.Command{},
+	//nolint:exhaustruct // Aliases, Description, Version, Commands intentionally unset for this command
 	Name:  "api",
 	Usage: "Run the Unkey API server for validating and managing API keys",
```

This makes the intent clearer while keeping the code concise.
131-131: Clarify whether `CacheInvalidationTopic` should be configurable.

This field is hardcoded to an empty string without a corresponding CLI flag (unlike other config fields such as `Platform`, `Image`, and `Region`). Additionally, while `KafkaBrokers` is configured from flags (line 165), there's no corresponding topic configuration. If this field is not used in this context, consider using a linter directive instead:

```diff
 config := api.Config{
 	// Basic configuration
-	CacheInvalidationTopic: "",
+	//nolint:exhaustruct // CacheInvalidationTopic not applicable for API command
 	Platform: cmd.String("platform"),
```

If it should be configurable, add a CLI flag similar to the Kafka brokers configuration (lines 79-80).
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (19)
- go/apps/api/integration/cluster/cache/consume_events_test.go (1 hunks)
- go/apps/api/integration/cluster/cache/e2e_test.go (0 hunks)
- go/apps/api/integration/cluster/cache/produce_events_test.go (2 hunks)
- go/apps/api/integration/harness.go (3 hunks)
- go/cmd/api/main.go (2 hunks)
- go/pkg/cache/clustering/cluster_cache.go (2 hunks)
- go/pkg/cache/clustering/consume_events_test.go (1 hunks)
- go/pkg/cache/clustering/dispatcher.go (2 hunks)
- go/pkg/cache/clustering/e2e_test.go (2 hunks)
- go/pkg/cache/clustering/noop.go (1 hunks)
- go/pkg/cache/clustering/produce_events_test.go (1 hunks)
- go/pkg/codes/generate.go (1 hunks)
- go/pkg/db/key_data.go (3 hunks)
- go/pkg/eventstream/consumer.go (4 hunks)
- go/pkg/eventstream/eventstream_integration_test.go (5 hunks)
- go/pkg/eventstream/noop.go (2 hunks)
- go/pkg/eventstream/producer.go (2 hunks)
- go/pkg/eventstream/topic.go (2 hunks)
- go/pkg/testutil/http.go (1 hunks)
💤 Files with no reviewable changes (1)
- go/apps/api/integration/cluster/cache/e2e_test.go
✅ Files skipped from review due to trivial changes (1)
- go/pkg/cache/clustering/noop.go
🚧 Files skipped from review as they are similar to previous changes (1)
- go/pkg/db/key_data.go
🧰 Additional context used
🧠 Learnings (5)
📚 Learning: 2025-07-21T18:13:18.938Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3638
File: go/pkg/cache/clustering/cluster_cache.go:108-129
Timestamp: 2025-07-21T18:13:18.938Z
Learning: In Go distributed cache implementations using Kafka event streams, producer instances should be created once during initialization and reused rather than creating new producers for each broadcast operation. This avoids unnecessary overhead and improves performance under high load. Apply this pattern when wrapping local caches with clustering capabilities that broadcast invalidation events.
Applied to files:
- go/pkg/eventstream/producer.go
- go/apps/api/integration/cluster/cache/produce_events_test.go
- go/pkg/cache/clustering/cluster_cache.go
- go/pkg/eventstream/eventstream_integration_test.go
📚 Learning: 2025-09-12T08:01:20.792Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3944
File: go/pkg/db/acme_challenge_update_verified_with_expiry.sql_generated.go:31-39
Timestamp: 2025-09-12T08:01:20.792Z
Learning: Do not review or suggest changes to files with sql_generated.go suffix or other files marked as auto-generated (containing "Code generated by" comments), as these are generated by tools like sqlc and changes would be overwritten on regeneration.
Applied to files:
go/pkg/codes/generate.go
📚 Learning: 2025-08-14T16:41:57.374Z
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3785
File: go/pkg/db/key_data.go:32-52
Timestamp: 2025-08-14T16:41:57.374Z
Learning: Using unsafe.Pointer to cast between different struct types (even with identical fields) in Go can cause memory corruption because the runtime may arrange identical field layouts differently in memory due to alignment, padding, or compiler optimizations. Always use explicit type handling or safe value copies instead.
Applied to files:
go/pkg/eventstream/consumer.go
📚 Learning: 2025-08-29T13:48:43.353Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3884
File: go/apps/api/routes/v2_ratelimit_delete_override/handler.go:218-228
Timestamp: 2025-08-29T13:48:43.353Z
Learning: In the unkeyed/unkey codebase, when working with ratelimit namespace caching, req.Namespace parameter is either the namespace ID or the namespace name, so cache invalidation only needs to remove cache keys for namespace.ID and namespace.Name - no need to also remove req.Namespace as a separate key.
Applied to files:
go/pkg/cache/clustering/produce_events_test.go
🧬 Code graph analysis (6)
go/apps/api/integration/cluster/cache/consume_events_test.go (1)
go/pkg/debug/header.go (2)
- `CacheHeader` (11-15)
- `ParseCacheHeader` (32-53)
go/pkg/testutil/http.go (3)
go/pkg/cache/clustering/cluster_cache.go (1)
- `Config` (34-58)
go/pkg/zen/server.go (1)
- `Config` (43-57)
go/apps/api/config.go (1)
- `Config` (23-105)
go/pkg/cache/clustering/e2e_test.go (3)
go/pkg/cache/interface.go (1)
- `Cache` (7-35)
go/pkg/cache/clustering/cluster_cache.go (3)
- `New` (62-139)
- `Config` (34-58)
- `ClusterCache` (18-31)
go/pkg/cache/cache.go (2)
- `New` (56-98)
- `Config` (34-51)
go/apps/api/integration/cluster/cache/produce_events_test.go (1)
go/gen/proto/cache/v1/invalidation.pb.go (3)
- `CacheInvalidationEvent` (25-37)
- `CacheInvalidationEvent` (50-50)
- `CacheInvalidationEvent` (65-67)
go/cmd/api/main.go (1)
go/pkg/cli/command.go (1)
Command(24-40)
go/pkg/cache/clustering/cluster_cache.go (1)
go/pkg/eventstream/topic.go (1)
Topic(31-41)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test API / API Test Local
- GitHub Check: Build / Build
🔇 Additional comments (25)
go/pkg/testutil/http.go (2)
76-81: LGTM! Explicit zero values for exhaustruct compliance.

The explicit nil and empty string values for `CacheInvalidationTopic` and `NodeID` are appropriate for a test harness where distributed cache invalidation isn't required. These changes make the struct initialization complete for the exhaustruct linter without altering test behavior.
84-86: LGTM! Explicit zero value for exhaustruct compliance.

Setting `MaxRequestBodySize: 0` explicitly makes the struct initialization complete for the exhaustruct linter. Per the zen.Config documentation, 0 means "no limit," which is appropriate for test scenarios.

go/pkg/eventstream/noop.go (1)
50-58: LGTM! Explicit initialization satisfies exhaustruct linter.

The explicit zero-value initialization (empty `Mutex{}`, nil slices) is verbose but acceptable for satisfying the exhaustruct linter. In Go, these fields would default to their zero values, but the explicit initialization makes the linter happy without changing behavior.

go/pkg/eventstream/producer.go (2)
49-50: LGTM! Appropriate linter suppression for external library struct.

The `exhaustruct` suppression is appropriate here since `kafka.Writer` is an external library struct with many fields, and explicitly initializing all fields would be impractical and reduce readability.
142-143: LGTM! Appropriate linter suppression for Kafka message construction.

The `exhaustruct` suppression is appropriate for `kafka.Message` as it's an external struct and only the required fields (Value, Headers) need to be set.

go/pkg/eventstream/topic.go (2)
70-78: LGTM! Explicit initialization satisfies exhaustruct linter.

Similar to the noop implementation, this explicit zero-value initialization (empty `Mutex{}`, nil slices) satisfies the exhaustruct linter without changing behavior.
135-141: LGTM! Explicit nil fields for exhaustruct compliance.

Setting `ReplicaAssignments` and `ConfigEntries` to `nil` explicitly satisfies the exhaustruct linter. These fields are optional in Kafka's topic configuration, and `nil` is the appropriate value when not customizing replica assignments or config entries.

go/pkg/eventstream/consumer.go (4)
74-76: LGTM! Explicit initialization satisfies exhaustruct linter.

Explicitly setting `fromBeginning: false` satisfies the linter requirement without changing the default behavior.
92-100: LGTM! Appropriate linter suppression and pointer type optimization.

The `exhaustruct` suppression is appropriate, and caching the `isPointerType` check at consumer creation is a good optimization to avoid repeated reflection on every message.
190-196: LGTM! Appropriate linter suppression for external library struct.

The `exhaustruct` suppression is appropriate for `kafka.ReaderConfig` as it's an external struct with optional fields.
228-239: Question: Can the type assertion actually fail here?

The type assertion at line 234 checks if `newInstance.(T)` succeeds, but since `newInstance` is constructed as `reflect.New(reflect.TypeOf(t).Elem()).Interface()`, where `t` is of type `T`, the assertion should always succeed. The error handling seems unreachable. Is there a specific scenario where this type assertion could fail? If not, consider simplifying to:

```go
if c.isPointerType {
	t = reflect.New(reflect.TypeOf(t).Elem()).Interface().(T)
}
```

This would make the code cleaner and avoid the unreachable error path.
go/apps/api/integration/cluster/cache/produce_events_test.go (3)
108-110: LGTM! Correct protobuf getter usage.

Switching from direct field access to getter methods (`GetCacheName()`, `GetCacheKey()`, `GetSourceInstance()`) is the correct pattern for accessing protobuf-generated struct fields.
118-124: LGTM! Good naming fix and consistent getter usage.

The variable rename from `apiByIdEvent` to `apiByIDEvent` follows Go naming conventions (ID should be capitalized), and the getter method usage is correct.
128-132: LGTM! Consistent protobuf getter usage in assertions.

All assertions correctly use getter methods to access event fields.
go/pkg/cache/clustering/cluster_cache.go (2)
97-110: LGTM! Explicit initialization satisfies exhaustruct linter.

Explicitly initializing fields (including `nil` for `producer` and `batchProcessor`) satisfies the linter without changing behavior. The fields are properly initialized on lines 113 and 117 respectively.
200-227: LGTM! Correct protobuf getter usage.

Switching to getter methods (`GetSourceInstance()`, `GetCacheName()`, `GetCacheKey()`) is the correct pattern for accessing protobuf-generated struct fields throughout the invalidation handling logic.

go/pkg/cache/clustering/consume_events_test.go (1)
82-90: LGTM! Correct protobuf getter usage in test.

The test correctly uses getter methods (`GetCacheName()`, `GetCacheKey()`) to access protobuf event fields.

go/pkg/cache/clustering/dispatcher.go (2)
46-51: LGTM! Explicit initialization satisfies exhaustruct linter.

Explicitly initializing `mu` and `consumer` (the latter to `nil`) satisfies the linter. The `consumer` is properly initialized on line 53.
61-73: LGTM! Correct protobuf getter usage.

Using `GetCacheName()` is the correct pattern for accessing the protobuf-generated struct field.

go/pkg/cache/clustering/e2e_test.go (1)
59-60: Clarify the rationale for explicit variable pre-declaration.

The changes replace idiomatic short variable declarations (`:=`) with explicit pre-declaration followed by assignment. This pattern doesn't appear to address exhaustruct warnings (which relate to uninitialized struct fields) and makes the code more verbose.

Could you clarify which specific linter rule required this change? If this is not addressing a linter issue, consider reverting to the standard Go idiom:
```diff
- var localCache cache.Cache[string, string]
- localCache, err = cache.New(cache.Config[string, string]{
+ localCache, err := cache.New(cache.Config[string, string]{
```

```diff
- var clusterCache *clustering.ClusterCache[string, string]
- clusterCache, err = clustering.New(clustering.Config[string, string]{
+ clusterCache, err := clustering.New(clustering.Config[string, string]{
```
go/pkg/cache/clustering/produce_events_test.go (1)
120-136: LGTM! Proper use of protobuf getters.

The migration from direct field access to protobuf getter methods (`GetCacheName()`, `GetCacheKey()`, `GetSourceInstance()`) is the correct approach. Getters provide nil-safety and are the recommended pattern for accessing protobuf message fields.

go/pkg/eventstream/eventstream_integration_test.go (2)
75-75: LGTM! Consistent protobuf getter usage.

All log statements and assertions have been correctly migrated to use protobuf getter methods (`GetCacheName()`, `GetCacheKey()`, `GetTimestamp()`, `GetSourceInstance()`). This ensures nil-safety and follows protobuf best practices.

Also applies to: 91-91, 103-106, 147-147, 150-150
163-163: LGTM! Modern Go range syntax.

Using `for i := range numMessages` is the idiomatic Go 1.22+ approach for iterating over a numeric range. This is cleaner and more expressive than the traditional C-style loop.

go/apps/api/integration/harness.go (2)
130-130: Appropriate nolint for test harness.

The `gosec` suppression for ephemeral port allocation is justified in this integration test context. Using `:0` allows the OS to assign an available port, preventing conflicts during parallel test execution.
142-145: No issues found; all test configuration values are appropriate.

The zero values are correctly configured for integration test execution:
- `HttpPort: 7070`: Correct. An ephemeral listener is provided (line 131, assigned at line 148), so this field is ignored per run.go:291–293.
- `MaxRequestBodySize: 0`: Correct. Per config.go:97–98, zero means no limit is enforced, which is appropriate for tests.
- `CacheInvalidationTopic: ""`: Correct. Kafka brokers are provisioned (line 140, assigned line 163), so the empty string defaults to `DefaultCacheInvalidationTopic` per run.go:232–234.
- `ChproxyToken: ""`: Correct. Empty string disables chproxy features per routes/register.go:85.
What does this PR do?
This PR fixes golang linter issues. Most of them are `exhaustruct` issues, and some are intentionally skipped because they are either too risky to change or have too many missing struct elements.

Fixes #4066
Type of change
How should this be tested?
`make fmt` is passing both locally and on CI

Checklist
Required
- `pnpm build`
- `pnpm fmt`
- `console.logs`
- `git pull origin main`

Appreciated