Note: CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Important: Review skipped. More than 25% of the files were skipped due to the max files limit; the review is being skipped to prevent a low-quality review. 58 of 165 files are above the max files limit of 100.

📝 Walkthrough

Large refactor renaming Version→Deployment, introducing Environments, replacing hostname_routes→domains and domain_challenges→acme_challenges, migrating the DB schema/queries/models, updating protobufs/gRPC services, changing ctrl ACME/S3 wiring, and adapting dashboard TRPC routes and the CLI to environment/deployment IDs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Client
    participant DashboardAPI as Dashboard TRPC
    participant DB
    participant Audit as Audit Log
    Client->>DashboardAPI: createProject(...)
    activate DashboardAPI
    DashboardAPI->>DB: tx: insert project
    DB-->>DashboardAPI: projectId
    DashboardAPI->>DB: insert environment (production)
    DashboardAPI->>Audit: emit environment.create (production)
    DashboardAPI->>DB: insert environment (preview)
    DashboardAPI->>Audit: emit environment.create (preview)
    DashboardAPI-->>Client: { projectId, environments }
    deactivate DashboardAPI
```

```mermaid
sequenceDiagram
    autonumber
    participant CLI as Deploy CLI
    participant Ctrl as DeploymentService
    participant DB
    participant Hydra as Workflow Engine
    participant Metal as VM Provisioner
    participant GW as Gateway
    CLI->>Ctrl: CreateDeployment(envId, ...)
    activate Ctrl
    Ctrl->>DB: insert deployment (environment_id, runtime_config)
    DB-->>Ctrl: deploymentId
    Ctrl->>Hydra: start deployment workflow (deploymentId)
    Hydra->>Metal: Create VM (deployment_id, resources)
    Metal-->>Hydra: returned address
    Hydra->>DB: Upsert VM (address)
    Hydra->>DB: Insert Domains (domain, deployment_id)
    Hydra-->>Ctrl: step/status updates
    Ctrl-->>CLI: response (deploymentId, status)
    GW->>GW: route using VM.Address and Deployment.Id
    deactivate Ctrl
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~180+ minutes
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 106
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (62)
apps/dashboard/lib/trpc/routers/deployment/getOpenApiDiff.ts (2)
155-159: Bug: merge order overrides your normalized `changes`

Spreading `diffData` after setting `changes` will clobber the normalization. Reverse the order.

```diff
- const normalizedDiff = {
-   changes: Array.isArray(diffData?.changes) ? diffData.changes : [],
-   ...diffData,
- };
+ const normalizedDiff = {
+   ...diffData,
+   changes: Array.isArray(diffData?.changes) ? diffData.changes : [],
+ };
```
96-109: Avoid hardcoded control-plane URL; confirm request field names

Keep the mock fallback (per prior preference), but read the base URL from env to make the demo portable. Also, the body uses `old_version_id`/`new_version_id`; confirm the server expects "version" vs the updated "deployment" naming.

```diff
- const response = await fetch(
-   "http://localhost:7091/ctrl.v1.OpenApiService/GetOpenApiDiff",
-   {
+ const response = await fetch(
+   `${process.env.CTRL_URL ?? "http://localhost:7091"}/ctrl.v1.OpenApiService/GetOpenApiDiff`,
+   {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
-       old_version_id: input.oldDeploymentId,
-       new_version_id: input.newDeploymentId,
+       old_version_id: input.oldDeploymentId, // TODO: switch to *_deployment_id when backend updates
+       new_version_id: input.newDeploymentId,
      }),
    },
  );
```

go/apps/gw/services/validation/validator.go (1)
62-87: Fix pointer-to-interface and stale-cache risk; include spec hash in cache key

- Returning `*validator.Validator` yields a pointer-to-interface; dereferencing with `(*v)` is unidiomatic.
- The cache key uses only the deployment ID; spec updates for the same deployment won't bust the cache.

```diff
  v, err := s.getOrCreateValidator(ctx, config.Deployment.Id, config.ValidationConfig.OpenapiSpec)
  if err != nil {
@@
- valid, errors := (*v).ValidateHttpRequest(req)
+ valid, errors := v.ValidateHttpRequest(req)
```

```diff
-func (s *Service) getOrCreateValidator(ctx context.Context, deploymentID string, spec string) (*validator.Validator, error) {
+func (s *Service) getOrCreateValidator(ctx context.Context, deploymentID string, spec string) (validator.Validator, error) {
@@
-	cacheKey := fmt.Sprintf("validator:%s", deploymentID)
+	// include a stable hash of spec content to invalidate cache on spec changes
+	sum := sha256.Sum256([]byte(spec))
+	cacheKey := fmt.Sprintf("validator:%s:%x", deploymentID, sum)
@@
-	return &cachedValidator, nil
+	return cachedValidator, nil
@@
-	return &v, nil
+	return v, nil
```

Additional required imports:

```go
// add to imports
import (
	"crypto/sha256"
)
```

If the cache already keys by spec elsewhere, we can align with that approach instead.

Also applies to: 107-165
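For illustration, the spec-hash key derivation suggested in the diff above can be exercised standalone; `validatorCacheKey` is a hypothetical helper name, not code from the PR:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// validatorCacheKey derives a cache key that changes whenever either the
// deployment ID or the OpenAPI spec content changes, so a spec update for
// the same deployment busts the stale cache entry.
func validatorCacheKey(deploymentID, spec string) string {
	sum := sha256.Sum256([]byte(spec))
	return fmt.Sprintf("validator:%s:%x", deploymentID, sum)
}

func main() {
	k1 := validatorCacheKey("dep_123", `{"openapi":"3.0.0"}`)
	k2 := validatorCacheKey("dep_123", `{"openapi":"3.1.0"}`)
	// same deployment, different spec: keys differ, forcing re-validation
	fmt.Println(k1 != k2)
}
```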
apps/dashboard/lib/trpc/routers/deploy/project/list.ts (3)
188-193: Only emit nextCursor when there is a next page.

Setting `nextCursor` even when `hasMore` is false encourages no-op fetches.

```diff
- nextCursor: projects.length > 0 ? projects[projects.length - 1].updatedAt : null,
+ nextCursor: hasMore && projects.length > 0
+   ? projects[projects.length - 1].updatedAt
+   : null,
```
66-74: Treat "contains" literally; escape LIKE wildcards.

User-supplied values containing `%` or `_` act as wildcards. If "contains" should be a literal substring match, escape them and use `ESCAPE`.

```diff
- const searchValue = `%${filter.value}%`;
+ const escapeLike = (s: string) => s.replaceAll("%", "\\%").replaceAll("_", "\\_");
+ const searchValue = `%${escapeLike(filter.value)}%`;
```

Also ensure your `like()` helper binds `ESCAPE '\'` as needed.

Also applies to: 76-89
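The same wildcard-escaping idea, sketched as a standalone Go helper for illustration (the TS snippet above is what applies to this file; `escapeLike` here is hypothetical and assumes the query uses backslash as the LIKE escape character):

```go
package main

import (
	"fmt"
	"strings"
)

// escapeLike escapes SQL LIKE wildcards (and the escape character itself)
// so user input is matched as a literal substring. Assumes the LIKE clause
// uses backslash escaping, e.g. `... LIKE ? ESCAPE '\\'`.
func escapeLike(s string) string {
	r := strings.NewReplacer(`\`, `\\`, `%`, `\%`, `_`, `\_`)
	return r.Replace(s)
}

func main() {
	// "50%_off" would otherwise match "50 dollars off" via the % and _ wildcards
	fmt.Println("%" + escapeLike("50%_off") + "%")
}
```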
113-115: Add secondary sort and compound cursor predicate for stable keyset pagination

- Change the ordering to include the unique `id` tie-breaker:

```diff
- orderBy: [desc(schema.projects.updatedAt)],
+ orderBy: [desc(schema.projects.updatedAt), desc(schema.projects.id)],
```

- Replace the single-field cursor check with a compound predicate:

```diff
- if (input.cursor && typeof input.cursor === "number") {
-   baseConditions.push(lt(schema.projects.updatedAt, input.cursor));
- }
+ if (input.cursor != null && typeof input.cursor === "number" && input.lastId) {
+   baseConditions.push(
+     or(
+       lt(schema.projects.updatedAt, input.cursor),
+       and(
+         eq(schema.projects.updatedAt, input.cursor),
+         lt(schema.projects.id, input.lastId),
+       ),
+     ),
+   );
+ }
```

- Extend the input schema (`apps/dashboard/app/(app)/projects/_components/list/projects-list.schema.ts`) to accept the tie-breaker ID:

```ts
export const projectsQueryPayload = baseProjectsSchema.extend({
  cursor: z.number().nullish(),
  lastId: z.string().nullish(),
});
```

go/apps/gw/services/auth/auth.go (2)
60-76: Parse the Authorization header case-insensitively and robustly.

The scheme is case-insensitive per RFC; the current check rejects "bearer". Also, `TrimPrefix` fails on extra spaces.

```diff
  authHeader := req.Header.Get("Authorization")
  if authHeader == "" { ... }
- if !strings.HasPrefix(authHeader, "Bearer ") {
+ parts := strings.Fields(authHeader)
+ if len(parts) != 2 || !strings.EqualFold(parts[0], "Bearer") {
    ...
  }
- apiKey := strings.TrimPrefix(authHeader, "Bearer ")
+ apiKey := parts[1]
```
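A self-contained sketch of the tolerant parsing suggested above; `bearerToken` is a hypothetical helper name, not the gateway's actual function:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// bearerToken extracts the credential from an Authorization header value,
// accepting any case for the "Bearer" scheme (auth schemes are
// case-insensitive per RFC 9110) and tolerating repeated spaces.
func bearerToken(header string) (string, error) {
	parts := strings.Fields(header)
	if len(parts) != 2 || !strings.EqualFold(parts[0], "Bearer") {
		return "", errors.New("malformed Authorization header")
	}
	return parts[1], nil
}

func main() {
	// lowercase scheme and extra spaces are both accepted
	tok, err := bearerToken("bearer   uk_live_123")
	fmt.Println(tok, err)
}
```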
95-104: Avoid potential panic: defer `emit()` only when `err == nil`.

If `keys.Get` returns a nil emitter on error, deferring `emit()` before checking `err` could panic.

```diff
- key, emit, err := a.keys.Get(ctx, &z, apiKey)
- defer emit()
- if err != nil {
+ key, emit, err := a.keys.Get(ctx, &z, apiKey)
+ if err != nil {
    return fault.Wrap(err, ... )
  }
+ defer emit()
```

go/pkg/db/bulk_deployment_step_insert.sql.go (1)
17-45: Add batching to avoid oversized SQL statements for large slices.

Large arg sets can exceed `max_allowed_packet` or parameter limits; batch inserts (e.g., 500-1k rows).

Example refactor (illustrative):

```go
const maxRowsPerBatch = 1000
for start := 0; start < len(args); start += maxRowsPerBatch {
	end := start + maxRowsPerBatch
	if end > len(args) {
		end = len(args)
	}
	batch := args[start:end]
	valueClauses := make([]string, len(batch))
	for i := range batch {
		valueClauses[i] = "( ?, ?, ?, ?, ?, ? )"
	}
	bulkQuery := fmt.Sprintf(bulkInsertDeploymentStep, strings.Join(valueClauses, ", "))
	allArgs := make([]any, 0, len(batch)*6)
	for _, arg := range batch {
		allArgs = append(allArgs, arg.WorkspaceID, arg.ProjectID, arg.DeploymentID, arg.Status, arg.Message, arg.CreatedAt)
	}
	if _, err := db.ExecContext(ctx, bulkQuery, allArgs...); err != nil {
		return err
	}
}
return nil
```

go/pkg/db/deployment_insert.sql_generated.go (1)
85-99: Guard against oversized runtime_config payloads at call sites.

Consider validating JSON size/schema before insert to prevent bloated rows.
go/apps/gw/services/routing/service.go (2)
122-127: Filter out VMs with no usable address.

Skip VMs where `Address.Valid` is false or the string is empty/whitespace.

```diff
  if vm.Status != db.VmsStatusRunning {
    continue
  }
- availableVms = append(availableVms, vm)
+ if !vm.Address.Valid || len(strings.TrimSpace(vm.Address.String)) == 0 {
+   continue
+ }
+ availableVms = append(availableVms, vm)
```

Add import:

```go
import "strings"
```
136-141: Avoid double-scheme and handle IPv6 correctly when building the URL.

Prepending "http://" blindly breaks if `Address` already contains a scheme, and it doesn't handle IPv6 bracket formatting. Build the URL via `net/url`.

```diff
-fullUrl := fmt.Sprintf("http://%s", selectedVM.Address.String)
-
-targetURL, err := url.Parse(fullUrl)
-if err != nil {
-	return nil, fmt.Errorf("invalid VM URL %s: %w", fullUrl, err)
-}
+addr := strings.TrimSpace(selectedVM.Address.String)
+var targetURL *url.URL
+if strings.Contains(addr, "://") {
+	u, err := url.Parse(addr)
+	if err != nil || u.Host == "" {
+		return nil, fmt.Errorf("invalid VM address %q: %w", addr, err)
+	}
+	targetURL = u
+} else {
+	targetURL = &url.URL{Scheme: "http", Host: addr}
+}
```

Also add:

```go
import "strings"
```

internal/db/drizzle/meta/0000_snapshot.json (5)
2246-2325: Domains: add lookup and uniqueness guarantees.

- Missing an index on `domain`; a common hot path for routing and ACME.
- Clarify uniqueness: global vs per-workspace. Add the corresponding unique constraint.

```diff
  "indexes": {
    "workspace_idx": {
      "name": "workspace_idx",
      "columns": ["workspace_id"],
      "isUnique": false
    },
    "project_idx": {
      "name": "project_idx",
      "columns": ["project_id"],
      "isUnique": false
-   }
+   },
+   "domain_idx": {
+     "name": "domain_idx",
+     "columns": ["domain"],
+     "isUnique": false
+   }
  },
  ...
- "uniqueConstraints": {}
+ "uniqueConstraints": {
+   "unique_domain_per_workspace_idx": {
+     "name": "unique_domain_per_workspace_idx",
+     "columns": ["workspace_id", "domain"]
+   }
+ }
```

If domains must be globally unique, switch the unique constraint to `columns: ["domain"]`.
2327-2415: ACME challenges: enforce token uniqueness and add a lookup index.

Prevent duplicate challenges and speed up lookups.

```diff
  "indexes": {
    "workspace_idx": {
      "name": "workspace_idx",
      "columns": ["workspace_id"],
      "isUnique": false
-   }
+   },
+   "domain_token_idx": {
+     "name": "domain_token_idx",
+     "columns": ["domain_id", "token"],
+     "isUnique": true
+   }
  },
  ...
- "uniqueConstraints": {}
+ "uniqueConstraints": {
+   "unique_domain_token": {
+     "name": "unique_domain_token",
+     "columns": ["domain_id", "token"]
+   }
+ }
```
2019-2126: Deployments: add an environment_id index (and optionally a composite for latest-by-status).

Typical queries fetch the latest READY deployment by environment.

```diff
  "indexes": {
    "workspace_idx": { "name": "workspace_idx", "columns": ["workspace_id"], "isUnique": false },
    "project_idx": { "name": "project_idx", "columns": ["project_id"], "isUnique": false },
    "status_idx": { "name": "status_idx", "columns": ["status"], "isUnique": false
-   }
+   },
+   "environment_idx": {
+     "name": "environment_idx",
+     "columns": ["environment_id"],
+     "isUnique": false
+   }
  },
```

Optional composite (if used frequently): `["environment_id", "status", "created_at DESC"]`
2184-2244: Index name nit: acme_users "domain_idx" actually indexes workspace_id.

Rename to avoid confusion.

```diff
- "domain_idx": {
-   "name": "domain_idx",
+ "workspace_idx": {
+   "name": "workspace_idx",
    "columns": ["workspace_id"],
    "isUnique": false
  }
```
1852-1918: Environments: nice unique(workspace_id, slug). Consider a foreign key or soft reference from keys.environment.

If `keys.environment` is meant to tie to `environments.slug`, add an index on `keys(environment)` (exists) and document the expected relation, or add an FK if acceptable.
go/pkg/db/acme_challenge_list_executable.sql_generated.go (2)
13-17: Qualify/alias `domain` to avoid ambiguity and future breakage.

Be explicit to prevent conflicts if `acme_challenges` ever gains a `domain` column. Apply in the source SQL (not this generated file).

```diff
-SELECT dc.id, dc.workspace_id, domain FROM acme_challenges dc
+SELECT dc.id, dc.workspace_id, d.domain AS domain FROM acme_challenges dc
```
13-17: Add missing indexes and paginate the ACME challenge query

- Apply `LIMIT`/`OFFSET` to bound batch size.
- Create a composite index on `acme_challenges(status, expires_at)`.
- Add an index on `domains(created_at)`.
- Confirmed the statuses `"waiting"` and `"verified"` exist in `AcmeChallengesStatus`.
29-41: Preallocate allArgs capacity to cut reallocations on large bulks.

Minor perf win; avoids repeated growth.

```diff
- var allArgs []any
+ // 9 args per row
+ allArgs := make([]any, 0, len(args)*9)
```

go/pkg/db/queries/domain_insert.sql (1)
1-23: Add updated_at to the INSERT and use VALUES(updated_at) in the upsert.

`domains.updated_at` is nullable with no default, so initial inserts will set it to NULL unless you supply it. In `go/pkg/db/queries/domain_insert.sql`: include `updated_at` in both the column list and VALUES, and change `ON DUPLICATE KEY UPDATE updated_at = ?` to `updated_at = VALUES(updated_at)`.

deployment/docker-compose.yaml (1)
151-159: Console port mismatch for MinIO

You configure `MINIO_CONSOLE_PORT_NUMBER=3903` but map 2903:2903. Map 3903 instead; otherwise the console won't be reachable.

```diff
- - 2903:2903
+ - 3903:3903
```

apps/dashboard/lib/trpc/routers/deploy/project/create.ts (2)
39-61: Close TOCTOU on the project slug check.

The pre-insert read can race. Rely on a DB unique constraint for (workspace_id, slug) and translate duplicate-key errors to TRPC CONFLICT.

I can add a small helper to map driver error codes to TRPCError(CONFLICT).
142-148: Don't fabricate createdAt; fetch it from the DB or omit it.

Returning `Date.now()` can drift from DB time and type. Either fetch via `returning()`/select-after-insert or omit createdAt.

Apply:

```diff
  return {
    id: projectId,
    name: input.name,
    slug: input.slug,
-   gitRepositoryUrl: input.gitRepositoryUrl,
-   createdAt: now,
+   gitRepositoryUrl: input.gitRepositoryUrl ?? null,
  };
```

go/pkg/db/bulk_deployment_insert.sql.go (2)
21-47: Add chunking to avoid parameter/packet limits on large batches.

A single Exec can exceed driver/DB limits (e.g., ~65k params, max_allowed_packet). Insert in chunks.

Example wrapper (non-generated file) to keep generators untouched:

```go
// in go/pkg/db/bulk_helpers.go (new, non-generated)
package db

import "context"

const deploymentCols = 11

func (q *BulkQueries) InsertDeploymentsChunked(ctx context.Context, db DBTX, args []InsertDeploymentParams, maxParams int) error {
	if len(args) == 0 {
		return nil
	}
	if maxParams <= 0 {
		maxParams = 60000 // conservative default
	}
	chunkSize := maxParams / deploymentCols
	if chunkSize <= 0 {
		chunkSize = 1
	}
	for start := 0; start < len(args); start += chunkSize {
		end := start + chunkSize
		if end > len(args) {
			end = len(args)
		}
		if err := q.InsertDeployments(ctx, db, args[start:end]); err != nil {
			return err
		}
	}
	return nil
}
```
30-31: Preallocate allArgs to reduce allocations.

Apply:

```diff
- var allArgs []any
+ allArgs := make([]any, 0, len(args)*11)
```

go/pkg/db/bulk_acme_challenge_insert.sql.go (1)
30-31: Preallocate allArgs capacity.

Apply:

```diff
- var allArgs []any
+ allArgs := make([]any, 0, len(args)*9)
```

go/pkg/db/bulk_domain_insert.sql.go (1)
34-44: Preallocate args capacity to reduce allocations.

Small perf win when inserting many rows.

```diff
- var allArgs []any
+ // 7 values per row + 1 for the ON DUPLICATE updated_at
+ allArgs := make([]any, 0, len(args)*7+1)
```

go/cmd/deploy/main.go (5)
324-332: Use strings.HasPrefix for clarity

Manual slicing is brittle; prefer the stdlib.

```diff
- for key, next := range stepSequence {
-   if len(currentMessage) >= len(key) && currentMessage[:len(key)] == key {
-     return next
-   }
- }
+ for key, next := range stepSequence {
+   if strings.HasPrefix(currentMessage, key) {
+     return next
+   }
+ }
```
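The prefix-matching rewrite can be exercised in isolation; the `nextStep` helper and the sample map entry below are illustrative, not the CLI's actual code:

```go
package main

import (
	"fmt"
	"strings"
)

// nextStep returns the next expected step message whose key prefixes the
// current message, mirroring the strings.HasPrefix rewrite suggested above.
// Returns "" when no key matches.
func nextStep(stepSequence map[string]string, currentMessage string) string {
	for key, next := range stepSequence {
		if strings.HasPrefix(currentMessage, key) {
			return next
		}
	}
	return ""
}

func main() {
	seq := map[string]string{
		"Creating VM for deployment:": "VM booted successfully:",
	}
	// the deployment ID suffix after the colon doesn't affect the match
	fmt.Println(nextStep(seq, "Creating VM for deployment: dep_123"))
}
```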
131-156: Update user-facing copy from Version→Deployment

CLI text still says "version" in multiple places (usage, description, steps, examples).

```diff
- Usage: "Deploy a new version or initialize configuration",
+ Usage: "Deploy a new deployment or initialize configuration",
@@
- Build and deploy a new version of your application,
+ Build and deploy your application,
@@
- 4. Create deployment version on Unkey platform
- 5. Monitor deployment status until active
+ 4. Create deployment on Unkey platform
+ 5. Monitor deployment status until ready
```
404-415: Label says "Domains" but the variable is named hostnames

Minor naming mismatch; consider renaming the local var to domains for consistency with the new model.
389-416: Use `deployment.GetEnvironmentId()` instead of the hardcoded default

Replace the hardcoded `DefaultEnvironment` with the value from `deployment.GetEnvironmentId()`, falling back only if it's empty. There is no `GetEnvironmentName()` method on `ctrlv1.Deployment`. For example:

```diff
- fmt.Printf("  %s: %s\n", CompletionEnvironment, DefaultEnvironment)
+ env := deployment.GetEnvironmentId()
+ if env == "" {
+   env = DefaultEnvironment
+ }
+ fmt.Printf("  %s: %s\n", CompletionEnvironment, env)
```
71-80: Update stepSequence to match current ctrl event strings

Mapping keys reference outdated messages and will never match the actual events (breaking the next-step UX). In go/cmd/deploy/main.go adjust as follows:

```diff
 var stepSequence = map[string]string{
-	"Version queued and ready to start": "Downloading Docker image:",
+	"Deployment queued and ready to start": "Downloading Docker image:",
	…
-	"Creating VM for deployment:": "VM booted successfully:",
+	"Creating VM for version:": "VM booted successfully:",
-	"VM booted successfully:": "Assigned hostname:",
-	"Assigned hostname:": MsgDeploymentStepCompleted,
+	"VM booted successfully:": "Assigned hostnames:",
+	"Assigned hostnames:": MsgDeploymentStepCompleted,
+	"Domain assignment completed": MsgDeploymentStepCompleted,
 }
```

go/pkg/partition/db/schema.sql (2)
22-38: Broken indexes/constraints reference dropped VM columns

The VM schema removed region/private_ip/port/health fields, but the indexes still reference them. This migration will fail to apply.

Replace obsolete indexes with ones matching the new columns:

```diff
 CREATE TABLE vms (
@@
-  INDEX idx_deployment_available (deployment_id, region, status),
-  INDEX idx_deployment_health (deployment_id, health_status, last_heartbeat),
-  INDEX idx_host_id (metal_host_id),
-  INDEX idx_region (region),
-  INDEX idx_status (status),
-  UNIQUE KEY unique_ip_port (private_ip, port)
+  INDEX idx_deployment_status (deployment_id, status),
+  INDEX idx_host_id (metal_host_id),
+  INDEX idx_status (status),
+  UNIQUE KEY unique_address (address)
 );
```

Also ensure the corresponding drizzle migration drops the old indexes before creating the new ones.
10-17: Add an index for gateways.workspace_id

Lookups are likely workspace-scoped. Add a secondary index for workspace_id to avoid full scans.

```diff
 CREATE TABLE gateways (
   `id` bigint unsigned NOT NULL AUTO_INCREMENT,
   `workspace_id` varchar(255) NOT NULL,
   `hostname` varchar(255) NOT NULL,
   `config` blob NOT NULL, -- Protobuf with all configuration including deployment_id, workspace_id
   PRIMARY KEY (`id`),
-  UNIQUE KEY `gateways_pk` (`hostname`)
+  UNIQUE KEY `gateways_pk` (`hostname`),
+  KEY `idx_gateways_workspace` (`workspace_id`)
 )
```
6-11: Avoid package name shadowing for readability.

This file is in package ctrl and also imports apps/ctrl as ctrl. Use an alias (e.g., appctrl) to reduce confusion.

```diff
-import (
-	"context"
-
-	"github.com/unkeyed/unkey/go/apps/ctrl"
+import (
+	"context"
+
+	appctrl "github.com/unkeyed/unkey/go/apps/ctrl"
```

And update references: `ctrl.Config` → `appctrl.Config`, `ctrl.Run` → `appctrl.Run`.

go/apps/ctrl/run.go (2)
199-207: Gate the ACME service handler behind cfg.AcmeEnabled.

Avoid exposing endpoints when ACME is disabled.

Apply this diff:

```diff
- mux.Handle(ctrlv1connect.NewAcmeServiceHandler(acme.New(acme.Config{
-   PartitionDB: partitionDB,
-   DB:          database,
-   HydraEngine: hydraEngine,
-   Logger:      logger,
- })))
+ if cfg.AcmeEnabled {
+   mux.Handle(ctrlv1connect.NewAcmeServiceHandler(acme.New(acme.Config{
+     PartitionDB: partitionDB,
+     DB:          database,
+     HydraEngine: hydraEngine,
+     Logger:      logger,
+   })))
+ }
```
178-184: Reduce log noise; downgrade the per-request auth header log to Debug.

Prevents log spam in production.

Apply this diff:

```diff
- logger.Info("Adding auth headers to metald request", "procedure", req.Spec().Procedure)
+ logger.Debug("Adding auth headers to metald request", "procedure", req.Spec().Procedure)
```

go/apps/ctrl/services/acme/providers/http_provider.go (4)
40-45: Use a context with timeout for DB ops in Present.

Avoid indefinite blocking.

Apply this diff:

```diff
- ctx := context.Background()
+ ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+ defer cancel()
```
56-58: Avoid logging ACME tokens.

Tokens are sensitive; remove them from logs.

Apply this diff:

```diff
- p.logger.Error("failed to store challenge", "error", err, "domain", domain, "token", token)
+ p.logger.Error("failed to store challenge", "error", err, "domain", domain)
```
69-76: Use a context with timeout for DB ops in CleanUp.

Apply this diff:

```diff
- ctx := context.Background()
+ ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
+ defer cancel()
```
84-86: Avoid logging ACME tokens on cleanup errors.

Apply this diff:

```diff
- p.logger.Warn("failed to clean up challenge", "error", err, "domain", domain, "token", token)
+ p.logger.Warn("failed to clean up challenge", "error", err, "domain", domain)
```

go/apps/ctrl/services/acme/certificate_workflow.go (3)
69-72: Use stepCtx, not ctx.Context(), inside step closures.

Ensures correct cancellation scoping.

Apply this diff:

```diff
- return db.Query.FindDomainByDomain(ctx.Context(), w.db.RO(), req.Domain)
+ return db.Query.FindDomainByDomain(stepCtx, w.db.RO(), req.Domain)
```
95-102: Remove the dead/incorrect err check capturing outer scope.

This block references the wrong err and never does what's intended.

Apply this diff:

```diff
- // Regardless we first claim the challenge so that no-other job tries to do the same, this will just annoy acme ratelimits
- if err != nil {
-   db.Query.UpdateAcmeChallengeStatus(ctx.Context(), w.db.RW(), db.UpdateAcmeChallengeStatusParams{
-     DomainID:  dom.ID,
-     Status:    db.AcmeChallengesStatusFailed,
-     UpdatedAt: sql.NullInt64{Valid: true, Int64: time.Now().UnixMilli()},
-   })
-   w.logger.Error("failed to obtain certificate", "error", err)
-   return EncryptedCertificate{}, err
- }
+ // Regardless we first claim the challenge so that no other job tries to do the same.
```
131-134: On ACME obtain/renew failure, update the challenge status to FAILED using stepCtx.

Apply this diff:

```diff
  if err != nil {
-   w.logger.Error("failed to renew/issue certificate", "error", err)
+   _ = db.Query.UpdateAcmeChallengeStatus(stepCtx, w.db.RW(), db.UpdateAcmeChallengeStatusParams{
+     DomainID:  dom.ID,
+     Status:    db.AcmeChallengesStatusFailed,
+     UpdatedAt: sql.NullInt64{Valid: true, Int64: time.Now().UnixMilli()},
+   })
+   w.logger.Error("failed to renew/issue certificate", "error", err)
    return EncryptedCertificate{}, err
  }
```

go/cmd/deploy/control_plane.go (3)
231-234: Don't log raw auth tokens.

Redact secrets in internal logs.

Apply this diff:

```diff
- fault.Internal(fmt.Sprintf("Authentication failed with token: %s", c.opts.AuthToken)),
+ fault.Internal("Authentication failed with provided token (redacted)"),
```
187-193: Gate the demo sleep behind an option.

Avoid artificial delays outside demos.

Apply this diff if you add a DemoMode flag to DeployOptions:

```diff
- // INFO: This is for demo purposes only.
- // Adding a small delay between deployment steps to make the progression
- // visually observable during demos. This allows viewers to see each
- // individual step (VM boot, rootfs loading, etc.) rather than having
- // everything complete too quickly to follow.
- time.Sleep(800 * time.Millisecond)
+ if c.opts.DemoMode {
+   time.Sleep(800 * time.Millisecond)
+ }
```
60-63: Add `EnvironmentID` to `DeployOptions` and propagate it from CLI flags

The `DeployOptions` struct in go/cmd/deploy/main.go (around line 83) lacks an `EnvironmentID` field. Add it there, update the CLI flag parsing to populate it, then replace the hardcoded `"env_prod"` in go/cmd/deploy/control_plane.go (lines 60-63) with `c.opts.EnvironmentID`.

go/apps/ctrl/services/deployment/create_deployment.go (3)
41-46: Validate environment existence and ownership before insert.

You trust `environment_id` from the request without verifying it exists or belongs to the same workspace/project. Add a lookup and ownership checks to prevent cross-project/tenant writes.

Apply:

```diff
@@
  // Verify project belongs to the specified workspace
  if project.WorkspaceID != req.Msg.GetWorkspaceId() {
    return nil, connect.NewError(connect.CodeInvalidArgument,
      fmt.Errorf("project %s does not belong to workspace %s",
        req.Msg.GetProjectId(), req.Msg.GetWorkspaceId()))
  }
+
+ // Validate environment exists and belongs to workspace & project
+ if req.Msg.GetEnvironmentId() == "" {
+   return nil, connect.NewError(connect.CodeInvalidArgument,
+     fmt.Errorf("environment_id is required"))
+ }
+ env, err := db.Query.FindEnvironmentById(ctx, s.db.RO(), req.Msg.GetEnvironmentId())
+ if err != nil {
+   if db.IsNotFound(err) {
+     return nil, connect.NewError(connect.CodeNotFound,
+       fmt.Errorf("environment not found: %s", req.Msg.GetEnvironmentId()))
+   }
+   return nil, connect.NewError(connect.CodeInternal, err)
+ }
+ if env.WorkspaceID != req.Msg.GetWorkspaceId() || env.ProjectID != req.Msg.GetProjectId() {
+   return nil, connect.NewError(connect.CodeInvalidArgument,
+     fmt.Errorf("environment %s does not belong to workspace %s / project %s",
+       req.Msg.GetEnvironmentId(), req.Msg.GetWorkspaceId(), req.Msg.GetProjectId()))
+ }
```

Also applies to: 63-67
49-55: Prefer using NullString.Valid for clarity in branch resolution.

Functionally OK, but checking `project.DefaultBranch.Valid` is clearer and avoids treating a valid-but-empty branch the same as nil.

```diff
- gitBranch := req.Msg.GetBranch()
- if gitBranch == "" {
-   gitBranch = project.DefaultBranch.String
-   if gitBranch == "" {
-     gitBranch = "main" // fallback default
-   }
- }
+ gitBranch := req.Msg.GetBranch()
+ if gitBranch == "" {
+   if project.DefaultBranch.Valid && project.DefaultBranch.String != "" {
+     gitBranch = project.DefaultBranch.String
+   } else {
+     gitBranch = "main" // fallback default
+   }
+ }
```

Also applies to: 67-69
87-96: Honor SourceType / derive the environment when environment_id is omitted.

`CreateDeploymentRequest.environment_id` is documented as optional (defaulting based on branch), but the server treats it as required. Either enforce required in the proto or implement default selection (e.g., map the default branch to a "production" environment).

I can add a helper to resolve the environment by (workspace_id, project_id, branch) and fall back to a configured default. Want me to draft it?
go/proto/ctrl/v1/deployment.proto (2)
54-87: Remove legacy build/rootfs fields from Deployment.

`rootfs_image_id` and `build_id` remain, but the schema dropped rootfs/builds. Keep the proto aligned to avoid confusion and dead fields.

Apply:

```diff
 message Deployment {
@@
-  // Associated hostnames for this deployment
-  repeated string hostnames = 13;
-
-  // Build information
-  string rootfs_image_id = 14;
-  string build_id = 15;
+  // Associated hostnames for this deployment
+  repeated string hostnames = 13;
 }
```

Optionally reserve the removed field numbers/names to prevent reuse:

```proto
reserved 14, 15;
reserved "rootfs_image_id", "build_id";
```
89-94: Consider an enum for DeploymentStep.status.

The DB uses a constrained set of step statuses; using `string` invites typos and brittle clients.

Introduce `enum DeploymentStepStatus { ... }` mirroring the DB values and switch `status` to that enum.

go/pkg/db/querier_generated.go (3)
856-880: InsertDomain relies on a missing unique key.

`ON DUPLICATE KEY UPDATE` won't trigger without a unique/PK violation. Add `UNIQUE(domain)` (see schema comments) or change the conflict target to an existing key.

Once the schema is fixed, no code changes are needed here.
1197-1202: Datetime vs BIGINT comparison in ListExecutableChallenges.

`expires_at` is BIGINT, but you compare it to `DATE_ADD(NOW(), ...)` (DATETIME). Always false or coerced incorrectly.

Update the SQL (source file) and regenerate. If using milliseconds:

```sql
... AND dc.expires_at <= UNIX_TIMESTAMP(UTC_TIMESTAMP() + INTERVAL 30 DAY) * 1000
```

If using seconds:

```sql
... AND dc.expires_at <= UNIX_TIMESTAMP(UTC_TIMESTAMP() + INTERVAL 30 DAY)
```

Also consider `UTC_TIMESTAMP()` for consistency.
116-123: Add supporting indexes for ACME lookups.

Queries filter by `workspace_id` and `domain_id` without an index on `domain_id`. See the schema suggestion to add `acme_challenges(domain_id)`.

Also applies to: 1515-1536
go/apps/ctrl/services/deployment/deploy_workflow.go (6)
596-613: Fix response body leaks in loops and use context-aware HTTP requests.

Deferring `resp.Body.Close()` inside loops can exhaust connections. Also, `client.Get` doesn't bind to stepCtx. Build requests with a context and drain/close bodies per iteration.

```diff
- resp, err := client.Get(healthURL)
+ req, reqErr := http.NewRequestWithContext(stepCtx, http.MethodGet, healthURL, nil)
+ if reqErr != nil {
+   w.logger.Warn("failed to build health check request", "error", reqErr, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
+   continue
+ }
+ resp, err := client.Do(req)
  if err != nil {
    w.logger.Warn("health check failed for host address", "error", err, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
    continue
  }
- defer resp.Body.Close()
  if resp.StatusCode == http.StatusOK {
+   io.Copy(io.Discard, resp.Body)
+   resp.Body.Close()
    w.logger.Info("container is healthy", "host_addr", hostAddr, "deployment_id", req.DeploymentID)
    return nil
  }
- w.logger.Warn("health check returned non-200 status", "status", resp.StatusCode, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
+ io.Copy(io.Discard, resp.Body)
+ resp.Body.Close()
+ w.logger.Warn("health check returned non-200 status", "status", resp.StatusCode, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
```

```diff
- resp, err := client.Get(openapiURL)
+ req, reqErr := http.NewRequestWithContext(stepCtx, http.MethodGet, openapiURL, nil)
+ if reqErr != nil {
+   w.logger.Warn("failed to build OpenAPI request", "error", reqErr, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
+   continue
+ }
+ resp, err := client.Do(req)
  if err != nil {
    w.logger.Warn("OpenAPI scraping failed for host address", "error", err, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
    continue
  }
- defer resp.Body.Close()
  if resp.StatusCode != http.StatusOK {
-   w.logger.Warn("OpenAPI endpoint returned non-200 status", "status", resp.StatusCode, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
+   io.Copy(io.Discard, resp.Body)
+   resp.Body.Close()
+   w.logger.Warn("OpenAPI endpoint returned non-200 status", "status", resp.StatusCode, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
    continue
  }

  // Read the OpenAPI spec
  specBytes, err := io.ReadAll(resp.Body)
+ resp.Body.Close()
  if err != nil {
    w.logger.Warn("failed to read OpenAPI spec response", "error", err, "host_addr", hostAddr, "deployment_id", req.DeploymentID)
    continue
  }
```

Also applies to: 653-676
105-118: Rename "version" → "deployment" in step names and messages.

Terminology drift can confuse observability and operators.

```diff
-	_, err = hydra.Step(ctx, "update-version-building", func(stepCtx context.Context) (*struct{}, error) {
+	_, err = hydra.Step(ctx, "update-deployment-building", func(stepCtx context.Context) (*struct{}, error) {
@@
-		return nil, fmt.Errorf("failed to update version status to building: %w", updateErr)
+		return nil, fmt.Errorf("failed to update deployment status to building: %w", updateErr)
```

```diff
-	Message: fmt.Sprintf("Creating VM for version: %s", req.DeploymentID),
+	Message: fmt.Sprintf("Creating VM for deployment: %s", req.DeploymentID),
```

```diff
-	_, err = hydra.Step(ctx, "update-version-deploying", func(stepCtx context.Context) (*struct{}, error) {
+	_, err = hydra.Step(ctx, "update-deployment-deploying", func(stepCtx context.Context) (*struct{}, error) {
@@
-		return nil, fmt.Errorf("failed to update version status to deploying: %w", deployingErr)
+		return nil, fmt.Errorf("failed to update deployment status to deploying: %w", deployingErr)
@@
-	w.logger.Error("failed to update version status to deploying", "error", err, "deployment_id", req.DeploymentID)
+	w.logger.Error("failed to update deployment status to deploying", "error", err, "deployment_id", req.DeploymentID)
```

Also applies to: 219-226, 232-248
540-559: Standardize on "ready" (DB enum) instead of "active", and rename the step accordingly.

The DB status is `DeploymentsStatusReady`; returning/logging "active" is inconsistent.

```diff
-	// Step 21: Update deployment status to active
-	_, err = hydra.Step(ctx, "update-deployment-active", func(stepCtx context.Context) (*DeploymentResult, error) {
+	// Step 21: Update deployment status to ready
+	_, err = hydra.Step(ctx, "update-deployment-ready", func(stepCtx context.Context) (*DeploymentResult, error) {
@@
-	w.logger.Info("updating deployment status to active", "deployment_id", req.DeploymentID, "completion_time", completionTime)
+	w.logger.Info("updating deployment status to ready", "deployment_id", req.DeploymentID, "completion_time", completionTime)
@@
-	w.logger.Error("failed to update deployment status to active", "error", activeErr, "deployment_id", req.DeploymentID)
-	return nil, fmt.Errorf("failed to update deployment status to active: %w", activeErr)
+	w.logger.Error("failed to update deployment status to ready", "error", activeErr, "deployment_id", req.DeploymentID)
+	return nil, fmt.Errorf("failed to update deployment status to ready: %w", activeErr)
@@
-	w.logger.Info("deployment complete", "deployment_id", req.DeploymentID, "status", "active")
+	w.logger.Info("deployment ready", "deployment_id", req.DeploymentID, "status", "ready")
@@
-	Status: "active",
+	Status: "ready",
```
682-685: Don't fail the workflow on OpenAPI scrape errors (align with health-check behavior).

The health check is non-fatal; scraping should be too. This lets deployments complete even if the app doesn't expose OpenAPI.

```diff
-	w.logger.Error("failed to scrape OpenAPI spec", "error", err, "deployment_id", req.DeploymentID)
-	return err
+	w.logger.Error("failed to scrape OpenAPI spec", "error", err, "deployment_id", req.DeploymentID)
+	// Non-fatal: continue without OpenAPI
```
356-385: Gateway config only targets req.Hostname; the generated primaryHostname is ignored.

Today, when `req.Hostname` is empty, you still create Domain entries (primary/local), but no GatewayConfig for them. Consider moving this step after domain assignment and configuring the first assigned hostname by default (or using a derived domain field rather than "hostname").
Would you like me to propose a refactor that moves “create-gateway-config” after domain assignment and derives the hostname from assignedHostnames[0] with feature flag gating?
441-458: Sanitize and truncate the full hostname label to RFC 1123.

Diff:

```diff
-	// Replace underscores with dashes for valid hostname format
-	cleanIdentifier := strings.ReplaceAll(identifier, "_", "-")
-	primaryHostname := fmt.Sprintf("%s-%s-%s.unkey.app", branch, cleanIdentifier, req.WorkspaceID)
+	raw := fmt.Sprintf("%s-%s-%s", branch, identifier, req.WorkspaceID)
+	label := sanitizeHostnameLabel(raw)
+	primaryHostname := fmt.Sprintf("%s.unkey.app", label)
```

Add this helper alongside existing code:

```go
// sanitizeHostnameLabel lowercases, replaces non-[a-z0-9] with '-', collapses repeats,
// trims leading/trailing '-', and truncates to 63 chars.
func sanitizeHostnameLabel(s string) string {
	s = strings.ToLower(s)
	var out []rune
	prevDash := false
	for _, r := range s {
		isAlnum := (r >= 'a' && r <= 'z') || (r >= '0' && r <= '9')
		if isAlnum {
			out = append(out, r)
			prevDash = false
			continue
		}
		if !prevDash {
			out = append(out, '-')
			prevDash = true
		}
	}
	res := strings.Trim(string(out), "-")
	if res == "" {
		res = "x"
	}
	if len(res) > 63 {
		res = res[:63]
	}
	return res
}
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (5)
- `go/gen/proto/ctrl/v1/ctrlv1connect/deployment.connect.go` is excluded by `!**/gen/**`
- `go/gen/proto/ctrl/v1/ctrlv1connect/version.connect.go` is excluded by `!**/gen/**`
- `go/gen/proto/ctrl/v1/deployment.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `go/gen/proto/metal/vmprovisioner/v1/wip.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `go/gen/proto/partition/v1/gateway.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
📒 Files selected for processing (107)
- `apps/dashboard/lib/audit.ts` (1 hunks)
- `apps/dashboard/lib/trpc/routers/deploy/project/create.ts` (1 hunks)
- `apps/dashboard/lib/trpc/routers/deploy/project/list.ts` (2 hunks)
- `apps/dashboard/lib/trpc/routers/deployment/getById.ts` (2 hunks)
- `apps/dashboard/lib/trpc/routers/deployment/getOpenApiDiff.ts` (2 hunks)
- `apps/dashboard/lib/trpc/routers/deployment/list.ts` (2 hunks)
- `apps/dashboard/lib/trpc/routers/deployment/listByEnvironment.ts` (2 hunks)
- `apps/dashboard/lib/trpc/routers/deployment/listByProject.ts` (2 hunks)
- `deployment/docker-compose.yaml` (2 hunks)
- `go/apps/ctrl/config.go` (2 hunks)
- `go/apps/ctrl/run.go` (3 hunks)
- `go/apps/ctrl/services/acme/certificate_verification.go` (2 hunks)
- `go/apps/ctrl/services/acme/certificate_workflow.go` (3 hunks)
- `go/apps/ctrl/services/acme/providers/http_provider.go` (2 hunks)
- `go/apps/ctrl/services/deployment/create_deployment.go` (4 hunks)
- `go/apps/ctrl/services/deployment/deploy_workflow.go` (14 hunks)
- `go/apps/ctrl/services/deployment/get_deployment.go` (1 hunks)
- `go/apps/ctrl/services/deployment/get_version.go` (0 hunks)
- `go/apps/ctrl/services/deployment/service.go` (2 hunks)
- `go/apps/gw/services/auth/auth.go` (2 hunks)
- `go/apps/gw/services/routing/service.go` (2 hunks)
- `go/apps/gw/services/validation/validator.go` (4 hunks)
- `go/cmd/ctrl/main.go` (2 hunks)
- `go/cmd/deploy/control_plane.go` (9 hunks)
- `go/cmd/deploy/main.go` (4 hunks)
- `go/pkg/db/acme_challenge_find_by_token.sql_generated.go` (1 hunks)
- `go/pkg/db/acme_challenge_insert.sql_generated.go` (1 hunks)
- `go/pkg/db/acme_challenge_list_executable.sql_generated.go` (3 hunks)
- `go/pkg/db/acme_challenge_try_claiming.sql_generated.go` (1 hunks)
- `go/pkg/db/acme_challenge_update_expires_at.sql_generated.go` (1 hunks)
- `go/pkg/db/acme_challenge_update_pending.sql_generated.go` (1 hunks)
- `go/pkg/db/acme_challenge_update_status.sql_generated.go` (1 hunks)
- `go/pkg/db/acme_user_update_registered.sql_generated.go` (0 hunks)
- `go/pkg/db/build_find_by_id.sql_generated.go` (0 hunks)
- `go/pkg/db/build_find_latest_by_deployment_id.sql_generated.go` (0 hunks)
- `go/pkg/db/build_insert.sql_generated.go` (0 hunks)
- `go/pkg/db/build_update_failed.sql_generated.go` (0 hunks)
- `go/pkg/db/build_update_status.sql_generated.go` (0 hunks)
- `go/pkg/db/build_update_succeeded.sql_generated.go` (0 hunks)
- `go/pkg/db/bulk_acme_challenge_insert.sql.go` (3 hunks)
- `go/pkg/db/bulk_build_insert.sql.go` (0 hunks)
- `go/pkg/db/bulk_deployment_insert.sql.go` (3 hunks)
- `go/pkg/db/bulk_deployment_step_insert.sql.go` (2 hunks)
- `go/pkg/db/bulk_domain_insert.sql.go` (3 hunks)
- `go/pkg/db/bulk_project_insert.sql.go` (2 hunks)
- `go/pkg/db/bulk_route_insert.sql.go` (0 hunks)
- `go/pkg/db/deployment_find_by_id.sql_generated.go` (3 hunks)
- `go/pkg/db/deployment_insert.sql_generated.go` (4 hunks)
- `go/pkg/db/deployment_step_find_by_deployment_id.sql_generated.go` (1 hunks)
- `go/pkg/db/deployment_step_insert.sql_generated.go` (1 hunks)
- `go/pkg/db/domain_challenge_find_by_token.sql_generated.go` (0 hunks)
- `go/pkg/db/domain_challenge_insert.sql_generated.go` (0 hunks)
- `go/pkg/db/domain_challenge_try_claiming.sql_generated.go` (0 hunks)
- `go/pkg/db/domain_challenge_update_expires_at.sql_generated.go` (0 hunks)
- `go/pkg/db/domain_challenge_update_pending.sql_generated.go` (0 hunks)
- `go/pkg/db/domain_challenge_update_status.sql_generated.go` (0 hunks)
- `go/pkg/db/domain_find_by_deployment_id.sql_generated.go` (1 hunks)
- `go/pkg/db/domain_find_by_domain.sql_generated.go` (1 hunks)
- `go/pkg/db/domain_insert.sql_generated.go` (2 hunks)
- `go/pkg/db/models_generated.go` (6 hunks)
- `go/pkg/db/project_find_by_id.sql_generated.go` (1 hunks)
- `go/pkg/db/project_find_by_workspace_slug.sql_generated.go` (1 hunks)
- `go/pkg/db/project_insert.sql_generated.go` (2 hunks)
- `go/pkg/db/querier_bulk_generated.go` (1 hunks)
- `go/pkg/db/querier_generated.go` (10 hunks)
- `go/pkg/db/queries/acme_challenge_find_by_token.sql` (1 hunks)
- `go/pkg/db/queries/acme_challenge_insert.sql` (2 hunks)
- `go/pkg/db/queries/acme_challenge_list_executable.sql` (1 hunks)
- `go/pkg/db/queries/acme_challenge_try_claiming.sql` (1 hunks)
- `go/pkg/db/queries/acme_challenge_update_expires_at.sql` (1 hunks)
- `go/pkg/db/queries/acme_challenge_update_pending.sql` (1 hunks)
- `go/pkg/db/queries/acme_challenge_update_status.sql` (1 hunks)
- `go/pkg/db/queries/build_find_by_id.sql` (0 hunks)
- `go/pkg/db/queries/build_find_latest_by_deployment_id.sql` (0 hunks)
- `go/pkg/db/queries/build_insert.sql` (0 hunks)
- `go/pkg/db/queries/build_update_failed.sql` (0 hunks)
- `go/pkg/db/queries/build_update_status.sql` (0 hunks)
- `go/pkg/db/queries/build_update_succeeded.sql` (0 hunks)
- `go/pkg/db/queries/deployment_find_by_id.sql` (1 hunks)
- `go/pkg/db/queries/deployment_insert.sql` (2 hunks)
- `go/pkg/db/queries/deployment_step_find_by_deployment_id.sql` (1 hunks)
- `go/pkg/db/queries/deployment_step_insert.sql` (1 hunks)
- `go/pkg/db/queries/domain_challenge_find_by_token.sql` (0 hunks)
- `go/pkg/db/queries/domain_challenge_update_expires_at.sql` (0 hunks)
- `go/pkg/db/queries/domain_challenge_update_status.sql` (0 hunks)
- `go/pkg/db/queries/domain_find_by_deployment_id.sql` (1 hunks)
- `go/pkg/db/queries/domain_insert.sql` (1 hunks)
- `go/pkg/db/queries/project_find_by_id.sql` (1 hunks)
- `go/pkg/db/queries/project_find_by_workspace_slug.sql` (1 hunks)
- `go/pkg/db/queries/project_insert.sql` (1 hunks)
- `go/pkg/db/queries/route_find_by_deployment_id.sql` (0 hunks)
- `go/pkg/db/queries/route_insert.sql` (0 hunks)
- `go/pkg/db/route_find_by_deployment_id.sql_generated.go` (0 hunks)
- `go/pkg/db/route_insert.sql_generated.go` (0 hunks)
- `go/pkg/db/schema.sql` (4 hunks)
- `go/pkg/partition/db/models_generated.go` (2 hunks)
- `go/pkg/partition/db/querier_generated.go` (2 hunks)
- `go/pkg/partition/db/queries/vm_upsert.sql` (1 hunks)
- `go/pkg/partition/db/schema.sql` (2 hunks)
- `go/pkg/partition/db/vm_find_by_id.sql_generated.go` (1 hunks)
- `go/pkg/partition/db/vm_upsert.sql_generated.go` (1 hunks)
- `go/pkg/uid/uid.go` (1 hunks)
- `go/proto/ctrl/v1/deployment.proto` (5 hunks)
- `go/proto/metal/vmprovisioner/v1/wip.proto` (1 hunks)
- `go/proto/partition/v1/gateway.proto` (1 hunks)
- `internal/db/drizzle/0000_woozy_chimera.sql` (4 hunks)
- `internal/db/drizzle/meta/0000_snapshot.json` (14 hunks)
⛔ Files not processed due to max files limit (15)
- internal/db/drizzle/meta/_journal.json
- internal/db/src/schema/acme_challenges.ts
- internal/db/src/schema/builds.ts
- internal/db/src/schema/deployment_steps.ts
- internal/db/src/schema/deployments.ts
- internal/db/src/schema/domains.ts
- internal/db/src/schema/environment_variables.ts
- internal/db/src/schema/environments.ts
- internal/db/src/schema/index.ts
- internal/db/src/schema/partitions.ts
- internal/db/src/schema/projects.ts
- internal/db/src/schema/rootfs_images.ts
- internal/db/src/schema/workspaces.ts
- internal/id/src/generate.ts
- internal/schema/src/auditlog.ts
💤 Files with no reviewable changes (29)
- go/pkg/db/queries/route_find_by_deployment_id.sql
- go/pkg/db/bulk_route_insert.sql.go
- go/pkg/db/domain_challenge_update_status.sql_generated.go
- go/pkg/db/domain_challenge_insert.sql_generated.go
- go/pkg/db/queries/domain_challenge_update_expires_at.sql
- go/pkg/db/queries/build_update_succeeded.sql
- go/pkg/db/queries/build_update_failed.sql
- go/pkg/db/queries/build_insert.sql
- go/pkg/db/queries/domain_challenge_update_status.sql
- go/pkg/db/queries/domain_challenge_find_by_token.sql
- go/pkg/db/queries/route_insert.sql
- go/pkg/db/queries/build_update_status.sql
- go/pkg/db/build_find_latest_by_deployment_id.sql_generated.go
- go/pkg/db/acme_user_update_registered.sql_generated.go
- go/pkg/db/route_find_by_deployment_id.sql_generated.go
- go/pkg/db/queries/build_find_by_id.sql
- go/pkg/db/build_update_succeeded.sql_generated.go
- go/pkg/db/route_insert.sql_generated.go
- go/pkg/db/build_update_status.sql_generated.go
- go/pkg/db/domain_challenge_find_by_token.sql_generated.go
- go/pkg/db/build_find_by_id.sql_generated.go
- go/pkg/db/build_update_failed.sql_generated.go
- go/pkg/db/build_insert.sql_generated.go
- go/apps/ctrl/services/deployment/get_version.go
- go/pkg/db/bulk_build_insert.sql.go
- go/pkg/db/domain_challenge_try_claiming.sql_generated.go
- go/pkg/db/domain_challenge_update_pending.sql_generated.go
- go/pkg/db/queries/build_find_latest_by_deployment_id.sql
- go/pkg/db/domain_challenge_update_expires_at.sql_generated.go
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-07-25T19:09:43.284Z
Learnt from: mcstepp
PR: unkeyed/unkey#3662
File: apps/dashboard/lib/trpc/routers/deployment/list.ts:11-11
Timestamp: 2025-07-25T19:09:43.284Z
Learning: In apps/dashboard/lib/trpc/routers/deployment/list.ts, the listDeployments procedure intentionally queries the versions table rather than a deployments table. The user mcstepp indicated that renaming the table would require a database migration, which was deferred for the current PR focused on UI features.
Applied to files:
- apps/dashboard/lib/trpc/routers/deployment/getById.ts
- apps/dashboard/lib/trpc/routers/deployment/listByEnvironment.ts
- apps/dashboard/lib/trpc/routers/deployment/list.ts
- apps/dashboard/lib/trpc/routers/deployment/listByProject.ts
- apps/dashboard/lib/trpc/routers/deploy/project/create.ts
- apps/dashboard/lib/trpc/routers/deploy/project/list.ts
- apps/dashboard/lib/trpc/routers/deployment/getOpenApiDiff.ts
📚 Learning: 2025-07-17T14:24:20.403Z
Learnt from: Flo4604
PR: unkeyed/unkey#3631
File: go/pkg/db/bulk_keyring_insert.sql.go:23-25
Timestamp: 2025-07-17T14:24:20.403Z
Learning: In go/pkg/db/bulk_keyring_insert.sql.go and similar bulk insert generated files, hardcoded zero values for fields like size_approx and size_last_updated_at are intentional and reflect the original SQL query structure, not missing parameters.
Applied to files:
- go/pkg/db/bulk_project_insert.sql.go
- go/pkg/db/bulk_domain_insert.sql.go
📚 Learning: 2025-07-25T19:11:00.208Z
Learnt from: mcstepp
PR: unkeyed/unkey#3662
File: apps/dashboard/lib/trpc/routers/deployment/getOpenApiDiff.ts:110-147
Timestamp: 2025-07-25T19:11:00.208Z
Learning: In apps/dashboard/lib/trpc/routers/deployment/getOpenApiDiff.ts, the user mcstepp prefers to keep mock data fallbacks in POC/demonstration code for simplicity, even if it wouldn't be production-ready. This aligns with the PR being work-in-progress for demonstration purposes.
Applied to files:
- apps/dashboard/lib/trpc/routers/deployment/list.ts
- apps/dashboard/lib/trpc/routers/deployment/listByProject.ts
- apps/dashboard/lib/trpc/routers/deployment/getOpenApiDiff.ts
📚 Learning: 2025-04-08T09:34:24.576Z
Learnt from: ogzhanolguncu
PR: unkeyed/unkey#2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: When querying or updating namespaces in the Unkey dashboard, always scope the operations to the current workspace using `eq(table.workspaceId, ctx.workspace.id)` to prevent cross-workspace access.
Applied to files:
apps/dashboard/lib/trpc/routers/deployment/list.ts
🪛 Gitleaks (8.27.2)
deployment/docker-compose.yaml
244-244: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
🪛 Buf (1.55.1)
go/proto/metal/vmprovisioner/v1/wip.proto
3-3: Files with package "metal.vmprovisioner.v1" must be within a directory "metal/vmprovisioner/v1" relative to root but were in directory "go/proto/metal/vmprovisioner/v1".
(PACKAGE_DIRECTORY_MATCH)
35-35: Enum zero value name "NETWORK_SIZE_SMALL" should be suffixed with "_UNSPECIFIED".
(ENUM_ZERO_VALUE_SUFFIX)
🪛 golangci-lint (2.2.2)
go/cmd/deploy/main.go
290-290: missing cases in switch of type ctrlv1.DeploymentStatus: ctrlv1.DeploymentStatus_DEPLOYMENT_STATUS_UNSPECIFIED, ctrlv1.DeploymentStatus_DEPLOYMENT_STATUS_PENDING, ctrlv1.DeploymentStatus_DEPLOYMENT_STATUS_BUILDING, ctrlv1.DeploymentStatus_DEPLOYMENT_STATUS_DEPLOYING, ctrlv1.DeploymentStatus_DEPLOYMENT_STATUS_NETWORK
(exhaustive)
@coderabbitai review

✅ Actions performed

Review triggered.
Actionable comments posted: 14
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (11)
internal/db/src/schema/deployments.ts (1)
22-26: Drop gitCommitAuthorEmail from internal/db/src/schema/deployments.ts
Column `git_commit_author_email` is missing in all DB schemas; remove this field to keep the TS schema in sync with the database.

```diff
-	gitCommitAuthorEmail: varchar("git_commit_author_email", { length: 256 }),
```

go/apps/ctrl/services/deployment/create_deployment_simple_test.go (5)
66-71: Fix subtest loop variable capture with t.Parallel().

The range variable `tt` is reused; capture it per iteration to avoid flaky tests.

Apply:

```diff
 for _, tt := range tests {
-	t.Run(tt.name, func(t *testing.T) {
+	tt := tt
+	t.Run(tt.name, func(t *testing.T) {
 		t.Parallel()
```

(Do this in all three loops.)
Also applies to: 244-247, 424-427
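The capture fix matters for any closure-per-iteration pattern; before Go 1.22 the loop variable was a single shared variable across iterations. A minimal standalone sketch (the `collect` helper is illustrative, not from the PR):

```go
package main

import (
	"fmt"
	"sync"
)

// collect launches one goroutine per value; the per-iteration copy (v := v)
// guarantees each closure captures its own element, matching the suggested
// `tt := tt` fix and working the same before and after Go 1.22's loop-variable
// scoping change.
func collect(vals []string) int {
	var mu sync.Mutex
	var wg sync.WaitGroup
	seen := map[string]bool{}
	for _, v := range vals {
		v := v // capture per iteration
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock()
			seen[v] = true
			mu.Unlock()
		}()
	}
	wg.Wait()
	return len(seen)
}

func main() {
	fmt.Println(collect([]string{"a", "b", "c"})) // 3
}
```

Without the copy, on Go versions before 1.22 all three goroutines could observe the final value of `v`, which is exactly the flakiness the review flags for parallel subtests.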
165-165: Rename test to reflect Deployment terminology.

Avoid legacy "Version" naming.

```diff
-func TestCreateVersionTimestampValidation_InvalidSecondsFormat(t *testing.T) {
+func TestCreateDeploymentTimestampValidation_InvalidSecondsFormat(t *testing.T) {
```
184-184: Rename test to reflect Deployment terminology.

```diff
-func TestCreateVersionTimestampValidation_ValidMillisecondsFormat(t *testing.T) {
+func TestCreateDeploymentTimestampValidation_ValidMillisecondsFormat(t *testing.T) {
```
261-261: Rename test to reflect Deployment terminology.

```diff
-func TestCreateVersionFieldMapping(t *testing.T) {
+func TestCreateDeploymentFieldMapping(t *testing.T) {
```
213-217: Correct the timestamp comment (1_000_000_000_000 ms).

1,000,000,000,000 ms is 2001-09-09 01:46:40 UTC, not Jan 1, 2001.

```diff
-	timestamp: 1_000_000_000_000, // Exactly Jan 1, 2001 in milliseconds
+	timestamp: 1_000_000_000_000, // 2001-09-09 01:46:40 UTC (ms)
```

go/proto/ctrl/v1/deployment.proto (2)
62-104: Reserve removed field number 19 (email) for wire-compat.

Prevents accidental reuse.

```diff
 message Deployment {
+  // Field 19 (email) intentionally removed; keep number reserved.
+  reserved 19;
   string id = 1;
   string workspace_id = 2;
   string project_id = 3;
   string environment_id = 4;
```
105-110: Consider an enum for DeploymentStep.status.

A stringly-typed status is brittle; introduce a DeploymentStepStatus enum.

```diff
-message DeploymentStep {
-  string status = 1;
+enum DeploymentStepStatus {
+  DEPLOYMENT_STEP_STATUS_UNSPECIFIED = 0;
+  DEPLOYMENT_STEP_STATUS_SCHEDULED = 1;
+  DEPLOYMENT_STEP_STATUS_RUNNING = 2;
+  DEPLOYMENT_STEP_STATUS_DONE = 3;
+  DEPLOYMENT_STEP_STATUS_FAILED = 4;
+}
+
+message DeploymentStep {
+  DeploymentStepStatus status = 1;
   string message = 2;
   string error_message = 3;
   int64 created_at = 4;
 }
```

go/apps/ctrl/services/deployment/create_deployment.go (3)
40-56: Ensure the environment exists and belongs to the project/workspace.

Prevents cross-tenant/environment misuse.

```diff
 	// Verify project belongs to the specified workspace
 	if project.WorkspaceID != req.Msg.GetWorkspaceId() {
 		return nil, connect.NewError(connect.CodeInvalidArgument,
 			fmt.Errorf("project %s does not belong to workspace %s", req.Msg.GetProjectId(), req.Msg.GetWorkspaceId()))
 	}
+
+	// Validate environment belongs to project/workspace
+	env, err := db.Query.FindEnvironmentById(ctx, s.db.RO(), req.Msg.GetEnvironmentId())
+	if err != nil {
+		if db.IsNotFound(err) {
+			return nil, connect.NewError(connect.CodeInvalidArgument,
+				fmt.Errorf("environment not found: %s", req.Msg.GetEnvironmentId()))
+		}
+		return nil, connect.NewError(connect.CodeInternal, err)
+	}
+	if env.ProjectID != req.Msg.GetProjectId() || env.WorkspaceID != req.Msg.GetWorkspaceId() {
+		return nil, connect.NewError(connect.CodeInvalidArgument,
+			fmt.Errorf("environment %s does not belong to project %s / workspace %s",
+				req.Msg.GetEnvironmentId(), req.Msg.GetProjectId(), req.Msg.GetWorkspaceId()))
+	}
```
66-82: Add source-type-specific validation.

Require inputs based on SourceType.

```diff
 	// Validate git commit timestamp if provided (must be Unix epoch milliseconds)
 	if req.Msg.GetGitCommitTimestamp() != 0 {
@@
 	}
+
+	// Validate source-specific fields
+	switch req.Msg.GetSourceType() {
+	case ctrlv1.SourceType_SOURCE_TYPE_GIT:
+		if strings.TrimSpace(req.Msg.GetGitCommitSha()) == "" {
+			return nil, connect.NewError(connect.CodeInvalidArgument, fmt.Errorf("git_commit_sha is required for GIT source_type"))
+		}
+	case ctrlv1.SourceType_SOURCE_TYPE_CLI_UPLOAD:
+		if strings.TrimSpace(req.Msg.GetDockerImageTag()) == "" {
+			return nil, connect.NewError(connect.CodeInvalidArgument, fmt.Errorf("docker_image_tag is required for CLI_UPLOAD source_type"))
+		}
+	default:
+		return nil, connect.NewError(connect.CodeInvalidArgument, fmt.Errorf("source_type is required"))
+	}
```
96-113: Validate non-empty EnvironmentID
`environment_id` is defined as NOT NULL in the database (schema.sql:323, 0000_woozy_chimera.sql:323), so ensure `req.Msg.GetEnvironmentId()` isn't empty; add upstream validation or defaulting logic before insertion.
♻️ Duplicate comments (15)
go/pkg/db/querier_generated.go (2)
871-895: ON DUPLICATE KEY for domains requires a unique key on domain.
`InsertDomain` upserts via a unique-key violation, but `schema.sql` has no `UNIQUE(domain)`. Add it (see the schema comment) or the upsert won't trigger except on ID.

Also applies to: 903-905
172-186: Null-equality bug: FindDomainsByDeploymentId won't match NULLs.

The signature uses `sql.NullString`, but the SQL is `WHERE deployment_id = ?`. Update the source SQL to use `sqlc.narg` NULL semantics and regenerate.

```sql
-- name: FindDomainsByDeploymentId :many
SELECT id, workspace_id, project_id, domain, deployment_id, created_at, updated_at
FROM domains
WHERE (
  (sqlc.narg('deployment_id') IS NULL AND deployment_id IS NULL)
  OR deployment_id = sqlc.narg('deployment_id')
)
ORDER BY created_at ASC;
```

go/pkg/db/schema.sql (4)
292-303: Environment slug uniqueness should include project_id.

Allow the same slug across projects in a workspace.

```diff
-CONSTRAINT `environments_workspace_id_slug_idx` UNIQUE(`workspace_id`,`slug`)
+CONSTRAINT `environments_workspace_id_project_id_slug_idx` UNIQUE(`workspace_id`,`project_id`,`slug`)
```
319-337: Index deployments by environment_id.

It's a common filter; add an index.

```diff
 CREATE INDEX `workspace_idx` ON `deployments` (`workspace_id`);
 CREATE INDEX `project_idx` ON `deployments` (`project_id`);
+CREATE INDEX `environment_idx` ON `deployments` (`environment_id`);
 CREATE INDEX `status_idx` ON `deployments` (`status`);
```

Also applies to: 404-406
359-369: Make domain unique and index deployment_id.

Required for the upsert and for lookups.

```diff
 CREATE TABLE `domains` (
@@
-	CONSTRAINT `domains_id` PRIMARY KEY(`id`)
+	CONSTRAINT `domains_id` PRIMARY KEY(`id`),
+	CONSTRAINT `domains_domain_unique` UNIQUE(`domain`)
 );
@@
 CREATE INDEX `workspace_idx` ON `domains` (`workspace_id`);
 CREATE INDEX `project_idx` ON `domains` (`project_id`);
+CREATE INDEX `deployment_idx` ON `domains` (`deployment_id`);
```

Also applies to: 408-409
371-383: Add an index on acme_challenges.domain_id.

Joins/filters use it; this reduces scans.

```diff
 CREATE TABLE `acme_challenges` (
@@
 	CONSTRAINT `acme_challenges_id` PRIMARY KEY(`id`)
 );
@@
 CREATE INDEX `workspace_idx` ON `acme_challenges` (`workspace_id`);
+CREATE INDEX `domain_id_idx` ON `acme_challenges` (`domain_id`);
```

Also applies to: 410-410
go/pkg/db/models_generated.go (1)
199-201: Align on "ready" vs "active" status naming.

The DB enum exposes `DeploymentsStatusReady = "ready"`. Ensure all non-DB code paths use "ready", not "active".
go/apps/ctrl/services/deployment/get_deployment.go (5)
3-10: Import fmt for wrapped/connect errors.

Needed for the validations/error wrapping below.

```diff
 import (
 	"context"
 	"database/sql"
+	"fmt"

 	"connectrpc.com/connect"
 	ctrlv1 "github.com/unkeyed/unkey/go/gen/proto/ctrl/v1"
 	"github.com/unkeyed/unkey/go/pkg/db"
 )
```
12-20: Validate the request and distinguish NotFound vs Internal.

Return InvalidArgument on an empty ID and map DB errors correctly.

```diff
 func (s *Service) GetDeployment(
 	ctx context.Context,
 	req *connect.Request[ctrlv1.GetDeploymentRequest],
 ) (*connect.Response[ctrlv1.GetDeploymentResponse], error) {
-	// Query deployment from database
-	deployment, err := db.Query.FindDeploymentById(ctx, s.db.RO(), req.Msg.GetDeploymentId())
-	if err != nil {
-		return nil, connect.NewError(connect.CodeNotFound, err)
-	}
+	// Validate
+	if req.Msg.GetDeploymentId() == "" {
+		return nil, connect.NewError(connect.CodeInvalidArgument, fmt.Errorf("deployment_id is required"))
+	}
+
+	// Query deployment from database
+	deployment, err := db.Query.FindDeploymentById(ctx, s.db.RO(), req.Msg.GetDeploymentId())
+	if err != nil {
+		if db.IsNotFound(err) {
+			return nil, connect.NewError(connect.CodeNotFound, err)
+		}
+		s.logger.Error("failed to load deployment", "error", err, "deployment_id", req.Msg.GetDeploymentId())
+		return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to load deployment"))
+	}
```
42-66: Populate Deployment.error_message from the DB.

Clients expect this when status is FAILED.

```diff
 	if deployment.GitBranch.Valid {
 		protoDeployment.GitBranch = deployment.GitBranch.String
 	}
+	if deployment.ErrorMessage.Valid {
+		protoDeployment.ErrorMessage = deployment.ErrorMessage.String
+	}
 	if deployment.GitCommitMessage.Valid {
 		protoDeployment.GitCommitMessage = deployment.GitCommitMessage.String
 	}
```
74-81: Include step error messages in the response.

Keeps the proto in sync with the DB schema/UX.

```diff
 	for i, step := range deploymentSteps {
 		protoSteps[i] = &ctrlv1.DeploymentStep{
-			Status:    string(step.Status),
-			CreatedAt: step.CreatedAt,
-			Message:   step.Message,
+			Status:       string(step.Status),
+			CreatedAt:    step.CreatedAt,
+			Message:      step.Message,
+			ErrorMessage: step.ErrorMessage,
 		}
 	}
```
85-87: Use the loaded deployment.ID when fetching domains.

Avoid relying on the request again.

```diff
-	routes, err := db.Query.FindDomainsByDeploymentId(ctx, s.db.RO(), sql.NullString{Valid: true, String: req.Msg.GetDeploymentId()})
+	routes, err := db.Query.FindDomainsByDeploymentId(ctx, s.db.RO(), sql.NullString{Valid: true, String: deployment.ID})
```
7-15: DB/proto status mismatch: NETWORK requires DB enum support or removal.

Ensure the DB enum includes 'network', or drop it here and renumber usages.

If keeping NETWORK, add it to DB enums/migrations; otherwise apply:

```diff
 enum DeploymentStatus {
   DEPLOYMENT_STATUS_UNSPECIFIED = 0;
   DEPLOYMENT_STATUS_PENDING = 1;
   DEPLOYMENT_STATUS_BUILDING = 2;
   DEPLOYMENT_STATUS_DEPLOYING = 3;
-  DEPLOYMENT_STATUS_NETWORK = 4;
-  DEPLOYMENT_STATUS_READY = 5;
-  DEPLOYMENT_STATUS_FAILED = 6;
+  DEPLOYMENT_STATUS_READY = 4;
+  DEPLOYMENT_STATUS_FAILED = 5;
 }
```
85-86: Time units consistency audit (ms vs NOW()).

Ensure related queries use the ms epoch (`UNIX_TIMESTAMP() * 1000`) to match `UnixMilli` writes.

```bash
#!/bin/bash
# Find NOW()/DATE_ADD usages that touch BIGINT timestamp columns
rg -nP 'NOW\(|DATE_ADD\(' --type=sql go/pkg/db internal/db -S
```
119-125: Log field key nit: use environment_id.

Consistent with the schema and other logs.

```diff
 	s.logger.Info("starting deployment workflow for deployment",
@@
-		"environment", req.Msg.GetEnvironmentId(),
+		"environment_id", req.Msg.GetEnvironmentId(),
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (1)
- `go/gen/proto/ctrl/v1/deployment.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
📒 Files selected for processing (13)
- `go/apps/ctrl/services/deployment/create_deployment.go` (5 hunks)
- `go/apps/ctrl/services/deployment/create_deployment_simple_test.go` (11 hunks)
- `go/apps/ctrl/services/deployment/get_deployment.go` (1 hunks)
- `go/pkg/db/bulk_deployment_insert.sql.go` (3 hunks)
- `go/pkg/db/deployment_find_by_id.sql_generated.go` (4 hunks)
- `go/pkg/db/deployment_insert.sql_generated.go` (5 hunks)
- `go/pkg/db/models_generated.go` (7 hunks)
- `go/pkg/db/querier_generated.go` (11 hunks)
- `go/pkg/db/queries/deployment_find_by_id.sql` (1 hunks)
- `go/pkg/db/queries/deployment_insert.sql` (2 hunks)
- `go/pkg/db/schema.sql` (4 hunks)
- `go/proto/ctrl/v1/deployment.proto` (6 hunks)
- `internal/db/src/schema/deployments.ts` (3 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-07-15T14:59:30.212Z
Learnt from: chronark
PR: unkeyed/unkey#3560
File: go/deploy/metald/internal/database/repository.go:0-0
Timestamp: 2025-07-15T14:59:30.212Z
Learning: go/deploy/metald cannot currently import helpers from go/pkg/db because it is not yet part of the main Go module; avoid suggesting such imports until the modules are unified.
Applied to files:
- go/apps/ctrl/services/deployment/get_deployment.go
📚 Learning: 2025-07-17T14:24:20.403Z
Learnt from: Flo4604
PR: unkeyed/unkey#3631
File: go/pkg/db/bulk_keyring_insert.sql.go:23-25
Timestamp: 2025-07-17T14:24:20.403Z
Learning: In go/pkg/db/bulk_keyring_insert.sql.go and similar bulk insert generated files, hardcoded zero values for fields like size_approx and size_last_updated_at are intentional and reflect the original SQL query structure, not missing parameters.
Applied to files:
- go/pkg/db/bulk_deployment_insert.sql.go
📚 Learning: 2025-07-25T19:09:43.284Z
Learnt from: mcstepp
PR: unkeyed/unkey#3662
File: apps/dashboard/lib/trpc/routers/deployment/list.ts:11-11
Timestamp: 2025-07-25T19:09:43.284Z
Learning: In apps/dashboard/lib/trpc/routers/deployment/list.ts, the listDeployments procedure intentionally queries the versions table rather than a deployments table. The user mcstepp indicated that renaming the table would require a database migration, which was deferred for the current PR focused on UI features.
Applied to files:
- internal/db/src/schema/deployments.ts
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (10)
internal/db/src/schema/deployments.ts (1)
54-69: Relations LGTM.

Environment and steps relations look correct and consistent with the schema.
go/pkg/db/deployment_insert.sql_generated.go (1)
110-129: Parameter order and types LGTM.
`RuntimeConfig` as `json.RawMessage` and the timestamp as `sql.NullInt64` align with the SQL.

go/pkg/db/querier_generated.go (1)
116-121: ACME methods LGTM.

The new ACME CRUD in the Querier matches table columns and the intended workflow.
Also applies to: 1530-1551
go/pkg/db/schema.sql (1)
333-334: Status enum includes 'network' — good.

Matches control-plane expectations.
go/pkg/db/models_generated.go (3)
586-591: Identity.Environment vs EnvironmentID: confirm intended semantics.

Deployment uses EnvironmentID (a string ID), but Identity still stores Environment (a string). If this is meant to be slug vs ID, OK; otherwise consider migrating to environment_id for consistency and FK integrity.
3-3: sqlc version pin: consistency verified repository-wide
All generated Go files are pinned to sqlc v1.27.0.
242-244: No remaining references to removed “generated” DomainsType
Ripgrep across all Go files found no instances of `DomainsTypeGenerated` or the literal `"generated"`.
12-21: LGTM: query/scan align with struct changes.

environment_id and runtime_config are correctly selected and scanned into EnvironmentID and RuntimeConfig.
Also applies to: 58-66
go/pkg/db/queries/deployment_find_by_id.sql (1)
2-10: LGTM: columns match the generated scanner.

The switch to environment_id/runtime_config is consistent.
go/apps/ctrl/services/deployment/create_deployment.go (1)
83-113: Trim lengths align with schema
The 10,240-char limit for `gitCommitMessage` into a text column, and the 256/256/512 limits for authorName/username/avatarUrl into varchar(256)/varchar(256)/varchar(512), match the DB schema.
Graphite Automations: "Notify author when CI fails" took an action on this PR (08/29/25). 1 teammate was notified based on Andreas Thomas's automation.
Thank you for following the naming conventions for pull request titles! 🙏
This fixes our (partially Claude-hallucinated) tables and protos to match what we need for the demo.
Summary by CodeRabbit