fix: use a new k8s_namespace column and use label based lookups#4436
Conversation
📝 Walkthrough

This change introduces Kubernetes namespace abstraction for workspaces by adding a new `k8s_namespace` column and switching Krane to label-based resource lookups.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant API as API Layer
    participant DB as Database
    participant Ctrl as Control Workflow
    participant Krane as Krane (K8s)
    Client->>API: CreateWorkspace
    API->>DB: Insert workspace (k8s_namespace = null)
    DB-->>API: workspace created
    Client->>Ctrl: CreateProject (first for workspace)
    Ctrl->>DB: Check workspace.k8s_namespace
    DB-->>Ctrl: k8s_namespace is null
    Ctrl->>Ctrl: Generate k8s_namespace (Nano ID)
    Ctrl->>DB: UpdateWorkspaceK8sNamespace
    DB-->>Ctrl: namespace initialized
    Ctrl->>Ctrl: Generate project ID
    Ctrl->>DB: InsertProject (with slug, repo, timestamps)
    DB-->>Ctrl: project created
    Ctrl->>DB: InsertEnvironments (with timestamps)
    DB-->>Ctrl: environments created
    loop For each environment
        Ctrl->>Krane: CreateGateway (namespace=k8s_namespace, replicas)
        Krane->>Krane: Create NS, Deployment (GenerateName), Service (labels)
        Krane-->>Ctrl: gateway created
    end
    Ctrl-->>Client: project + environments + gateways
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
Pre-merge checks and finishing touches: ❌ Failed checks (1 warning)
✅ Passed checks (4 passed)
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
go/apps/krane/backend/kubernetes/gateway_get.go (1)
108-116: Fix Pod phase to status mapping.

Two issues in the switch statement:

- `PodFailed` maps to `GATEWAY_STATUS_TERMINATING`, but "terminating" implies graceful shutdown. A failed pod should map to a failure/error status or `UNSPECIFIED`.
- `PodSucceeded` has an empty handler with an incomplete comment, leaving `podStatus` as zero value.

```diff
 case corev1.PodFailed:
-	podStatus = kranev1.GatewayStatus_GATEWAY_STATUS_TERMINATING
+	podStatus = kranev1.GatewayStatus_GATEWAY_STATUS_UNSPECIFIED // or a dedicated FAILED status if available
 case corev1.PodSucceeded:
-	// Handling to handle at this point
+	podStatus = kranev1.GatewayStatus_GATEWAY_STATUS_UNSPECIFIED // Succeeded pods have completed their work
```

go/apps/krane/backend/kubernetes/deployment_get.go (1)

108-109: Clarify or fix the incomplete comment.

The comment "Handling to handle at this point" appears to be a typo or placeholder. If `PodSucceeded` intentionally falls through to use the zero value (`DEPLOYMENT_STATUS_UNSPECIFIED`), consider making this explicit.

```diff
 case corev1.PodSucceeded:
-	// Handling to handle at this point
+	podStatus = kranev1.DeploymentStatus_DEPLOYMENT_STATUS_UNSPECIFIED
```
🧹 Nitpick comments (7)
go/apps/krane/backend/kubernetes/krane.go (1)
1-3: Consider adding documentation.

This unexported constant lacks context. Please add a comment explaining its purpose and usage, especially since it appears to be related to Kubernetes namespace or resource identification.

```diff
 package kubernetes

+// krane is the constant identifier used for Kubernetes resources managed by Krane.
 const krane = "krane"
```

go/apps/krane/backend/kubernetes/gateway_delete.go (1)
78-83: Consider adjusting logged counts for accuracy.

The `services_deleted` and `deployments_deleted` counts reflect the number of resources listed, not necessarily the number deleted. If a resource is deleted between list and delete (resulting in a 404 that's silently ignored), the count would be slightly misleading. This is a minor observability nit.

If precise counts are desired, track successful deletions:

```diff
+servicesDeleted := 0
 for _, service := range serviceList.Items {
 	k.logger.Debug("deleting service",
 		"name", service.Name,
 		"gateway_id", gatewayID,
 	)
 	err = k.clientset.CoreV1().Services(namespace).Delete(ctx, service.Name, deleteOptions)
 	if err != nil && !apierrors.IsNotFound(err) {
 		return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete service %s: %w", service.Name, err))
 	}
+	if err == nil {
+		servicesDeleted++
+	}
 }
```

go/apps/krane/backend/kubernetes/gateway_create.go (1)
179-181: Consider returning the service name in the response.

The response only returns the status. The caller will need to call `GetGateway` to discover the generated service name for constructing the DNS address. Returning the service name directly would save an extra API call.

```diff
 return connect.NewResponse(&kranev1.CreateGatewayResponse{
-	Status: kranev1.GatewayStatus_GATEWAY_STATUS_PENDING,
+	Status:  kranev1.GatewayStatus_GATEWAY_STATUS_PENDING,
+	Address: fmt.Sprintf("%s.%s.svc.cluster.local", service.Name, service.Namespace),
 }), nil
```

Note: This would require updating the proto definition to include an `address` field in `CreateGatewayResponse`.

go/apps/krane/backend/kubernetes/gateway_get.go (1)
47-52: Consider logging a warning when multiple deployments are found.

The code assumes a single deployment per gateway ID, taking the first result silently. If multiple deployments exist (indicating a data inconsistency), this should be logged for debugging.

```diff
 if len(deployments.Items) == 0 {
 	return nil, connect.NewError(connect.CodeNotFound, fmt.Errorf("gateway not found: %s", gatewayID))
 }

+if len(deployments.Items) > 1 {
+	k.logger.Warn("multiple deployments found for gateway, using first",
+		"gateway_id", gatewayID,
+		"count", len(deployments.Items),
+	)
+}
+
 // Use the first (and should be only) Deployment
 deployment := &deployments.Items[0]
```

go/apps/krane/backend/kubernetes/deployment_get.go (1)
47-52: Consider logging a warning if multiple StatefulSets match.

The comment notes there "should be only" one StatefulSet, but if multiple exist due to a bug, this would silently use just the first one.

```diff
 if len(statefulSets.Items) == 0 {
 	return nil, connect.NewError(connect.CodeNotFound, fmt.Errorf("deployment not found: %s", deploymentID))
 }

+if len(statefulSets.Items) > 1 {
+	k.logger.Warn("multiple statefulsets found for deployment", "deployment_id", deploymentID, "count", len(statefulSets.Items))
+}
+
 // Use the first (and should be only) StatefulSet
 sfs := &statefulSets.Items[0]
```

go/apps/ctrl/workflows/project/create.go (2)
42-47: Consider checking `Valid` explicitly and adding `restate.WithName`.

Using `.String` on a `sql.NullString` without checking `.Valid` works (returns empty string if null), but is not idiomatic. Also, this `restate.Run` call lacks `WithName` for consistency with other calls.

```diff
-k8sNamespace := workspace.K8sNamespace.String
+k8sNamespace := ""
+if workspace.K8sNamespace.Valid {
+	k8sNamespace = workspace.K8sNamespace.String
+}

 // This should really be in a dedicated createWorkspace call I think,
 // but this works for now
 if k8sNamespace == "" {
 	k8sNamespace, err = restate.Run(ctx, func(runCtx restate.RunContext) (string, error) {
 		name := uid.Nano(12)
```

Add the `WithName` option:

```diff
-	})
+	}, restate.WithName("init k8s namespace"))
```
55-61: Good use of atomic check-and-set pattern, but error message could be clearer.

The SQL condition `k8s_namespace is null` prevents race conditions. However, if `affected != 1`, it could mean another concurrent request already set the namespace. Consider re-fetching the workspace instead of returning an error.

```diff
 if affected != 1 {
-	return "", errors.New("failed to update workspace k8s namespace")
+	// Another request may have set the namespace concurrently; re-fetch
+	ws, err := db.Query.FindWorkspaceByID(runCtx, s.db.RW(), workspace.ID)
+	if err != nil {
+		return "", fmt.Errorf("failed to re-fetch workspace: %w", err)
+	}
+	if ws.K8sNamespace.Valid && ws.K8sNamespace.String != "" {
+		return ws.K8sNamespace.String, nil
+	}
+	return "", errors.New("failed to update workspace k8s namespace")
 }
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
`go/go.sum` is excluded by `!**/*.sum`
📒 Files selected for processing (25)
- `apps/api/src/pkg/testutil/harness.ts` (2 hunks)
- `apps/dashboard/lib/trpc/routers/workspace/create.ts` (1 hunks)
- `go/apps/ctrl/workflows/project/create.go` (4 hunks)
- `go/apps/krane/backend/kubernetes/deployment_create.go` (7 hunks)
- `go/apps/krane/backend/kubernetes/deployment_delete.go` (1 hunks)
- `go/apps/krane/backend/kubernetes/deployment_get.go` (4 hunks)
- `go/apps/krane/backend/kubernetes/gateway_create.go` (7 hunks)
- `go/apps/krane/backend/kubernetes/gateway_delete.go` (1 hunks)
- `go/apps/krane/backend/kubernetes/gateway_get.go` (4 hunks)
- `go/apps/krane/backend/kubernetes/id.go` (0 hunks)
- `go/apps/krane/backend/kubernetes/krane.go` (1 hunks)
- `go/apps/krane/backend/kubernetes/labels/labels.go` (1 hunks)
- `go/go.mod` (5 hunks)
- `go/pkg/db/key_find_live_by_hash.sql_generated.go` (3 hunks)
- `go/pkg/db/key_find_live_by_id.sql_generated.go` (3 hunks)
- `go/pkg/db/models_generated.go` (1 hunks)
- `go/pkg/db/querier_generated.go` (5 hunks)
- `go/pkg/db/queries/workspace_update_k8s_namespace.sql` (1 hunks)
- `go/pkg/db/schema.sql` (2 hunks)
- `go/pkg/db/workspace_find_by_id.sql_generated.go` (2 hunks)
- `go/pkg/db/workspace_list.sql_generated.go` (3 hunks)
- `go/pkg/db/workspace_update_k8s_namespace.sql_generated.go` (1 hunks)
- `go/pkg/uid/nanoid.go` (1 hunks)
- `go/pkg/uid/nanoid_test.go` (1 hunks)
- `internal/db/src/schema/workspaces.ts` (1 hunks)
💤 Files with no reviewable changes (1)
- go/apps/krane/backend/kubernetes/id.go
🧰 Additional context used
🧠 Learnings (11)
📓 Common learnings
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 4190
File: go/internal/services/keys/verifier.go:51-53
Timestamp: 2025-10-30T15:10:52.743Z
Learning: PR #4190 for unkeyed/unkey is focused solely on database schema and query changes for identity-based credits. It adds IdentityCredits and KeyCredits fields to structs and queries, but does not implement the priority enforcement logic in the usagelimiter. The logic implementation is intentionally deferred to a later PR in the stack.
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: When querying or updating namespaces in the Unkey dashboard, always scope the operations to the current workspace using `eq(table.workspaceId, ctx.workspace.id)` to prevent cross-workspace access.
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3755
File: .github/workflows/runbook-freshness-check.yaml:157-173
Timestamp: 2025-08-08T14:59:52.283Z
Learning: Repo unkeyed/unkey: When a CI/workflow fix is deferred, imeyer prefers a thorough GitHub issue be opened with sections (Summary, Impact, Where, Repro, Observed vs Expected, Acceptance Criteria, Validation Plan, Out of Scope, References) and assigned to imeyer, including backlinks to the originating PR and comment.
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: In the Unkey dashboard, when making database queries involving workspaces, use `ctx.workspace.id` directly instead of fetching the workspace separately for better performance and security.
Learnt from: chronark
Repo: unkeyed/unkey PR: 2180
File: apps/dashboard/lib/constants/workspace-navigations.tsx:56-118
Timestamp: 2024-10-04T20:44:38.489Z
Learning: When typing the `workspace` parameter in functions like `createWorkspaceNavigation`, prefer importing the `Workspace` type from the database module and picking the necessary keys (e.g., `features`) instead of redefining the interface.
📚 Learning: 2025-04-08T09:34:24.576Z
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: When querying or updating namespaces in the Unkey dashboard, always scope the operations to the current workspace using `eq(table.workspaceId, ctx.workspace.id)` to prevent cross-workspace access.
Applied to files:
- `internal/db/src/schema/workspaces.ts`
- `apps/dashboard/lib/trpc/routers/workspace/create.ts`
- `go/pkg/db/queries/workspace_update_k8s_namespace.sql`
- `go/pkg/db/workspace_list.sql_generated.go`
- `go/pkg/db/querier_generated.go`
- `apps/api/src/pkg/testutil/harness.ts`
- `go/pkg/db/workspace_update_k8s_namespace.sql_generated.go`
- `go/pkg/db/key_find_live_by_id.sql_generated.go`
- `go/pkg/db/key_find_live_by_hash.sql_generated.go`
📚 Learning: 2024-10-04T20:44:38.489Z
Learnt from: chronark
Repo: unkeyed/unkey PR: 2180
File: apps/dashboard/lib/constants/workspace-navigations.tsx:56-118
Timestamp: 2024-10-04T20:44:38.489Z
Learning: When typing the `workspace` parameter in functions like `createWorkspaceNavigation`, prefer importing the `Workspace` type from the database module and picking the necessary keys (e.g., `features`) instead of redefining the interface.
Applied to files:
- `internal/db/src/schema/workspaces.ts`
- `apps/dashboard/lib/trpc/routers/workspace/create.ts`
- `go/pkg/db/workspace_find_by_id.sql_generated.go`
- `go/pkg/db/workspace_list.sql_generated.go`
- `go/pkg/db/key_find_live_by_hash.sql_generated.go`
📚 Learning: 2025-09-23T17:39:59.820Z
Learnt from: perkinsjr
Repo: unkeyed/unkey PR: 4009
File: apps/dashboard/app/(app)/[workspace]/apis/[apiId]/_overview/components/table/components/override-indicator.tsx:88-97
Timestamp: 2025-09-23T17:39:59.820Z
Learning: The useWorkspaceNavigation hook in the Unkey dashboard guarantees that a workspace exists. If no workspace is found, the hook redirects the user to create a new workspace. Users cannot be logged in without a workspace, and new users must create one to continue. Therefore, workspace will never be null when using this hook.
Applied to files:
apps/dashboard/lib/trpc/routers/workspace/create.ts
📚 Learning: 2025-04-08T09:34:24.576Z
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: In the Unkey dashboard, when making database queries involving workspaces, use `ctx.workspace.id` directly instead of fetching the workspace separately for better performance and security.
Applied to files:
- `go/pkg/db/queries/workspace_update_k8s_namespace.sql`
- `go/pkg/db/workspace_find_by_id.sql_generated.go`
- `go/pkg/db/workspace_list.sql_generated.go`
- `go/pkg/db/querier_generated.go`
- `go/pkg/db/key_find_live_by_id.sql_generated.go`
- `go/pkg/db/key_find_live_by_hash.sql_generated.go`
📚 Learning: 2025-09-01T15:10:44.959Z
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3899
File: go/proto/metald/v1/deployment.proto:7-15
Timestamp: 2025-09-01T15:10:44.959Z
Learning: In the unkey/unkey repository, the metald service receives deployment_id values from upstream services rather than generating them internally. The deployment_id field in DeploymentRequest is part of a service-to-service communication pattern.
Applied to files:
- `go/apps/krane/backend/kubernetes/deployment_create.go`
- `go/apps/krane/backend/kubernetes/labels/labels.go`
- `go/apps/krane/backend/kubernetes/gateway_create.go`
- `go/apps/krane/backend/kubernetes/deployment_get.go`
📚 Learning: 2025-10-15T10:12:40.810Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 4098
File: go/proto/ctrl/v1/deployment.proto:33-36
Timestamp: 2025-10-15T10:12:40.810Z
Learning: In the Unkey codebase proto files (ctrl/v1/build.proto, ctrl/v1/deployment.proto, hydra/v1/deployment.proto), use `dockerfile_path` (not `docker_file_path`) for consistency in generated Go field names.
Applied to files:
- `go/apps/krane/backend/kubernetes/deployment_create.go`
- `go/apps/krane/backend/kubernetes/labels/labels.go`
📚 Learning: 2025-09-11T08:17:21.423Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3944
File: go/apps/ctrl/services/deployment/fallbacks/k8s.go:236-247
Timestamp: 2025-09-11T08:17:21.423Z
Learning: The K8s fallback backend in go/apps/ctrl/services/deployment/fallbacks/k8s.go is specifically designed for scenarios where the ctrl service runs inside a Kubernetes cluster. It should not be used when ctrl runs outside the cluster - in those cases, the Docker fallback should be used instead.
Applied to files:
go/apps/krane/backend/kubernetes/deployment_create.go
📚 Learning: 2025-07-16T09:18:45.379Z
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 3564
File: go/cmd/cli/commands/deploy/deploy.go:153-158
Timestamp: 2025-07-16T09:18:45.379Z
Learning: In the go/cmd/cli/commands/deploy/ CLI codebase, ogzhanolguncu prefers to allow deployment to continue even when Docker push fails (around lines 153-158 in deploy.go) because the team is working locally and needs this behavior for local development workflows where registry access might not be available.
Applied to files:
go/go.mod
📚 Learning: 2025-09-12T08:01:20.792Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3944
File: go/pkg/db/acme_challenge_update_verified_with_expiry.sql_generated.go:31-39
Timestamp: 2025-09-12T08:01:20.792Z
Learning: Do not review or suggest changes to files with sql_generated.go suffix or other files marked as auto-generated (containing "Code generated by" comments), as these are generated by tools like sqlc and changes would be overwritten on regeneration.
Applied to files:
go/pkg/db/workspace_update_k8s_namespace.sql_generated.go
📚 Learning: 2025-09-01T08:29:10.199Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3895
File: go/pkg/db/key_list_live_by_auth_id.sql_generated.go:98-105
Timestamp: 2025-09-01T08:29:10.199Z
Learning: In Unkey's ListLiveKeysByKeyAuthID query, adding ka.workspace_id = k.workspace_id constraint negatively impacts index performance. Workspace validation is handled upstream at the API level before the query is called, making the additional constraint unnecessary.
Applied to files:
- `go/pkg/db/key_find_live_by_id.sql_generated.go`
- `go/pkg/db/key_find_live_by_hash.sql_generated.go`
🧬 Code graph analysis (13)
go/pkg/db/models_generated.go (1)
- go/pkg/db/types/null_string.go (1): `NullString` (10-10)

go/pkg/uid/nanoid_test.go (1)
- go/pkg/uid/nanoid.go (1): `Nano` (49-67)

go/apps/krane/backend/kubernetes/deployment_create.go (1)
- go/apps/krane/backend/kubernetes/labels/labels.go (2): `DeploymentID` (5-5), `ManagedBy` (8-8)

go/apps/krane/backend/kubernetes/gateway_create.go (1)
- go/apps/krane/backend/kubernetes/labels/labels.go (2): `ManagedBy` (8-8), `GatewayID` (6-6)

go/pkg/db/workspace_list.sql_generated.go (1)
- go/pkg/db/models_generated.go (1): `Workspace` (1012-1031)

go/apps/krane/backend/kubernetes/gateway_get.go (1)
- go/apps/krane/backend/kubernetes/labels/labels.go (2): `GatewayID` (6-6), `ManagedBy` (8-8)

go/apps/krane/backend/kubernetes/deployment_get.go (1)
- go/apps/krane/backend/kubernetes/labels/labels.go (2): `DeploymentID` (5-5), `ManagedBy` (8-8)

go/pkg/db/querier_generated.go (2)
- go/pkg/db/interface.go (1): `DBTX` (29-34)
- go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (1): `UpdateWorkspaceK8sNamespaceParams` (19-22)

go/apps/krane/backend/kubernetes/gateway_delete.go (2)
- go/gen/proto/krane/v1/gateway.pb.go (6): `DeleteGatewayRequest` (344-350), `DeleteGatewayRequest` (363-363), `DeleteGatewayRequest` (378-380), `DeleteGatewayResponse` (396-400), `DeleteGatewayResponse` (413-413), `DeleteGatewayResponse` (428-430)
- go/apps/krane/backend/kubernetes/labels/labels.go (2): `GatewayID` (6-6), `ManagedBy` (8-8)

go/apps/ctrl/workflows/project/create.go (9)
- go/pkg/uid/nanoid.go (1): `Nano` (49-67)
- go/pkg/db/queries.go (1): `Query` (29-29)
- go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (1): `UpdateWorkspaceK8sNamespaceParams` (19-22)
- go/pkg/db/types/null_string.go (1): `NullString` (10-10)
- go/apps/krane/backend/kubernetes/service.go (1): `New` (39-58)
- go/apps/ctrl/workflows/project/service.go (1): `New` (31-38)
- go/pkg/db/project_insert.sql_generated.go (1): `InsertProjectParams` (29-39)
- go/pkg/db/environment_insert.sql_generated.go (1): `InsertEnvironmentParams` (28-37)
- apps/dashboard/gen/proto/krane/v1/gateway_pb.ts (1): `GatewayRequest` (21-56)

go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (3)
- go/pkg/db/types/null_string.go (1): `NullString` (10-10)
- go/pkg/db/queries.go (1): `Queries` (3-3)
- go/pkg/db/interface.go (1): `DBTX` (29-34)

go/pkg/db/key_find_live_by_id.sql_generated.go (1)
- go/pkg/db/models_generated.go (1): `Workspace` (1012-1031)

go/pkg/db/key_find_live_by_hash.sql_generated.go (1)
- go/pkg/db/models_generated.go (1): `Workspace` (1012-1031)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
- GitHub Check: Build / Build
- GitHub Check: Test API / API Test Local
- GitHub Check: Test Packages / Test
- GitHub Check: Test Dashboard / Test Dashboard
- GitHub Check: Test Go API Local / Test
- GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (24)
go/pkg/uid/nanoid.go (1)
1-67: LGTM! Clear security documentation.

The implementation is correct and the security warnings are appropriately prominent. The function uses `math/rand/v2`, which is suitable for internal identifiers like Kubernetes namespaces.

Note: With 8 characters from a 36-character alphabet (36^8 ≈ 2.8 trillion possibilities), collision probability becomes non-negligible after ~1-2 million workspaces due to the birthday paradox. The unique constraint on `k8s_namespace` in the database schema (line 211 of `go/pkg/db/schema.sql`) provides the necessary collision protection.

go/pkg/uid/nanoid_test.go (1)
1-91: LGTM! Comprehensive test coverage.

The tests cover all essential cases: default behavior, custom lengths, edge cases (zero length), and uniqueness smoke testing. The validation of character membership in `nanoAlphabet` ensures correctness.

apps/dashboard/lib/trpc/routers/workspace/create.ts (1)
84-84: LGTM! Correct initialization.

Initializing `k8sNamespace` to `null` is appropriate here. Based on the PR objectives, the namespace will be generated when a project is created within the workspace, not during workspace creation itself.

go/pkg/db/models_generated.go (1)
1018-1018: LGTM! Generated code is correct.

The `K8sNamespace` field addition is properly generated with the correct type (`sql.NullString`) and db tag. The positioning after `PartitionID` matches the schema definition.

go/pkg/db/schema.sql (1)
195-211: LGTM! Schema changes are well-designed.

The column definition is correct:

- `varchar(63)` matches Kubernetes namespace naming limits (RFC 1123 DNS label)
- Nullable allows assignment after workspace creation
- UNIQUE constraint prevents namespace collisions, which is the core purpose of this PR
apps/api/src/pkg/testutil/harness.ts (2)
272-272: LGTM! Test fixture updated correctly.

The `k8sNamespace: null` initialization is consistent with the production workspace creation code and properly reflects the nullable nature of the field.

292-292: LGTM! Consistent initialization.

Both test workspaces correctly initialize `k8sNamespace` to `null`, maintaining consistency across test fixtures.

go/pkg/db/workspace_find_by_id.sql_generated.go (1)
13-44: LGTM! Generated query code is correct.

The `k8s_namespace` column is properly included in the SELECT statement and the row scanning logic. The field ordering matches the schema definition.

internal/db/src/schema/workspaces.ts (1)
31-32: LGTM! Kubernetes namespace column properly defined.

The `varchar(63)` length correctly aligns with Kubernetes namespace naming limits (RFC 1123 DNS label), and the unique constraint addresses the collision prevention goal stated in the PR objectives.

go/pkg/db/queries/workspace_update_k8s_namespace.sql (1)
1-5: Good idempotent update pattern.

The `k8s_namespace is null` condition ensures existing namespace assignments are never overwritten. Callers should check `RowsAffected()` from the `sql.Result` to distinguish between a successful update and a no-op when the namespace was already set.

go/pkg/db/key_find_live_by_hash.sql_generated.go (1)
18-18: Generated code correctly includes k8s_namespace.

The SELECT column addition and corresponding scan at line 286 are properly aligned with the updated `Workspace` struct.

go/pkg/db/workspace_list.sql_generated.go (1)
14-14: Generated code correctly propagates k8s_namespace.

The column addition is consistent across the SELECT statement, documentation comment, and scan operations.
go/pkg/db/key_find_live_by_id.sql_generated.go (1)
18-18: Generated code correctly includes k8s_namespace.

Consistent with the schema changes and other generated query files.
go/apps/krane/backend/kubernetes/gateway_delete.go (1)
15-31: Label-based deletion is a robust improvement.

The shift from name-based to label-based resource selection follows Kubernetes best practices and provides more reliable cleanup. The selector correctly combines `GatewayID` and `ManagedBy` labels for precise targeting.

go/apps/krane/backend/kubernetes/deployment_delete.go (1)
46-55: LGTM on the service deletion loop.

The error handling correctly ignores `IsNotFound` errors, which is appropriate since another process or garbage collection could have already deleted the resource. The per-resource logging provides good observability.

go/pkg/db/querier_generated.go (1)
2090-2095: LGTM - idempotent update pattern.

The `WHERE id = ? and k8s_namespace is null` condition ensures the namespace is only set once, preventing accidental overwrites. This is the correct pattern for a one-time migration/initialization of the new column.

go/apps/krane/backend/kubernetes/gateway_create.go (1)
57-63: Good use of GenerateName and centralized labels.

Using `GenerateName` instead of a fixed name avoids naming collisions when multiple gateways exist in a namespace. The label-based identification pattern with `labels.GatewayID` is a solid approach for resource management.

go/apps/krane/backend/kubernetes/gateway_get.go (2)
36-36: LGTM on the label selector construction.

Using `fmt.Sprintf("%s=%s", labels.GatewayID, gatewayID)` is consistent with the label constants defined in the labels package.
77-79: Appropriate error classification for missing service.

Returning `CodeInternal` when the deployment exists but the service doesn't is correct. This indicates an inconsistent state (the OwnerReference should have ensured the service exists), not that the gateway doesn't exist.

go/apps/krane/backend/kubernetes/deployment_create.go (1)
70-79: LGTM on the namespace and deploymentID extraction.

The refactoring to use namespace from the request and centralized labels for resource identification aligns well with the PR's objective of preventing namespace collisions.
go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (1)
1-31: Skipping review of auto-generated file.

This file is generated by sqlc and should not be manually modified. Based on learnings, files with the `sql_generated.go` suffix are excluded from review.

go/apps/ctrl/workflows/project/create.go (2)
139-156: LGTM on gateway provisioning with `k8sNamespace`.

The gateway creation correctly uses the workspace's `k8sNamespace` instead of the workspace ID, which aligns with the PR objective of preventing namespace collisions.

47-47: No issue found: `uid.Nano` produces Kubernetes-valid namespace names.

The `nanoAlphabet` constant (line 10 of `go/pkg/uid/nanoid.go`) is defined as `"abcdefghijklmnopqrstuvwxyz0123456789"`, containing only lowercase letters and digits. This generates 12-character strings that fully comply with Kubernetes namespace requirements (lowercase alphanumeric, max 63 characters). The original concern about invalid characters is unfounded.

go/go.mod (1)
20-20: Dependency updates appear incidental to this PR, but verify restatedev/sdk-go v0.22.0 compatibility.

These go.mod changes were pulled in from the main branch merge and aren't directly related to the k8s_namespace column feature. Docker/docker v28.5.1 is safe: it contains no breaking Go API changes, only Go runtime and field deprecations. However, restatedev/sdk-go v0.22.0 could not be verified via web search (no public release notes found), and it is actively used in your protobuf-generated Restate integration code. Before merging, confirm that v0.22.0 is compatible with your Restate setup and doesn't introduce breaking changes to the encoding or ingress modules used in `go/gen/proto/hydra/v1/*_restate.pb.go`. The golang.org/x and OpenTelemetry updates are routine maintenance and safe.
```go
namespace := req.Msg.GetDeployment().GetNamespace()
deploymentID := req.Msg.GetDeployment().GetDeploymentId()
const krane = "krane"
```
🛠️ Refactor suggestion | 🟠 Major
Extract krane constant to package level for reuse.
This constant is duplicated/missing in deployment_get.go. Based on the AI summary, there should be a shared constant in go/apps/krane/backend/kubernetes/krane.go. Consider using that or moving this to package level.
```diff
-func (k *k8s) CreateDeployment(...) {
-	const krane = "krane"
+const krane = "krane"
+
+func (k *k8s) CreateDeployment(...) {
```

Or import from the shared location if krane.go exports it:

```shell
#!/bin/bash
# Check if krane constant exists in krane.go
cat go/apps/krane/backend/kubernetes/krane.go 2>/dev/null || echo "File not found"
```

🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/deployment_create.go around line 74, the
local const `krane` is duplicated elsewhere; extract it to the package level or
reuse the shared constant in go/apps/krane/backend/kubernetes/krane.go: either
move the const declaration out of the function to the top of the package (so
other files can reference it) or, if krane.go already defines and exports the
constant, import/use that exported name instead; update deployment_create.go to
reference the package-level constant and remove the local duplicate, and adjust
visibility/name (exported vs unexported) in krane.go as needed so all files
compile.
```go
Containers: []corev1.Container{
	{
		Name:  "todo",
		Image: req.Msg.GetDeployment().GetImage(),
```
Missing required Name field for container.
Kubernetes requires every container to have a Name field. Without it, the StatefulSet creation will fail with a validation error.
```diff
 Containers: []corev1.Container{
 	{
-
+		Name:  "app",
 		Image: req.Msg.GetDeployment().GetImage(),
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```diff
 Containers: []corev1.Container{
 	{
-		Name:  "todo",
+		Name:  "app",
 		Image: req.Msg.GetDeployment().GetImage(),
```
🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/deployment_create.go around lines 172-175,
the corev1.Container literal is missing the required Name field which will cause
Kubernetes validation to fail; set the container Name to a valid DNS-1123 label
derived from the deployment name (e.g. sanitize and lower-case
req.Msg.GetDeployment().GetName() and append "-container"), with a safe fallback
like "container" if the deployment name is empty, and ensure the resulting
string conforms to Kubernetes naming rules before assigning it to the
Container.Name field.
```go
// Create label selector for this deployment
labelSelector := fmt.Sprintf("deployment-id=%s", deploymentID)
```
Use the centralized label constant for consistency.
The label selector uses the hard-coded string "deployment-id" but the new labels package defines DeploymentID = "unkey.com/deployment.id". This inconsistency means resources created with the new label scheme won't be found during deletion.
```diff
+import "github.com/unkeyed/unkey/go/apps/krane/backend/kubernetes/labels"
+
 // Create label selector for this deployment
-labelSelector := fmt.Sprintf("deployment-id=%s", deploymentID)
+labelSelector := fmt.Sprintf("%s=%s", labels.DeploymentID, deploymentID)
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents

```
In go/apps/krane/backend/kubernetes/deployment_delete.go around lines 29-30,
the label selector is built using the hard-coded "deployment-id" string, which
is inconsistent with the centralized labels package; replace the hard-coded
literal with the labels.DeploymentID constant (e.g. fmt.Sprintf("%s=%s",
labels.DeploymentID, deploymentID)) and add the labels package to the file's
import list so deletion uses the same label key as resource creation.
```
```go
k.logger.Info("getting deployment", "deployment_id", k8sDeploymentID)
// Create label selector for this deployment
labelSelector := fmt.Sprintf("%s=%s,%s=%s", labels.DeploymentID, deploymentID, labels.ManagedBy, krane)
```

**Undefined variable `krane` will cause a compilation failure.**

The variable `krane` is used here but is not declared in this file. In deployment_create.go it is defined as a local constant inside the function (`const krane = "krane"`), making it inaccessible here.

Either import it from a shared location or define the constant in this file:

```diff
+const krane = "krane"
+
 func (k *k8s) GetDeployment(ctx context.Context, req *connect.Request[kranev1.GetDeploymentRequest]) (*connect.Response[kranev1.GetDeploymentResponse], error) {
```

Alternatively, verify whether there is a shared constant in go/apps/krane/backend/kubernetes/krane.go that should be used here.
🏁 Script executed:

```shell
#!/bin/bash
# Check if krane constant is defined elsewhere
rg -n "const krane" --type=go
ast-grep --pattern 'const krane = $_'
```

🤖 Prompt for AI Agents

```
In go/apps/krane/backend/kubernetes/deployment_get.go around line 36, the
identifier `krane` is undefined and will cause a compile error; fix it by
either adding a file-scoped constant `const krane = "krane"` near the top of
this file or (preferred) importing and using the shared constant from
go/apps/krane/backend/kubernetes/krane.go (e.g. replace `krane` with the
exported constant name), then rebuild to ensure the labelSelector line
compiles.
```
```diff
 // List and delete Services with this gateway-id label
 //nolint: exhaustruct
 serviceList, err := k.clientset.CoreV1().Services(namespace).List(ctx, metav1.ListOptions{
 	LabelSelector: labelSelector,
 })
-if err != nil && !apierrors.IsNotFound(err) {
-	return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete deployment: %w", err))
+if err != nil {
+	return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to list services: %w", err))
 }

 for _, service := range serviceList.Items {
 	k.logger.Debug("deleting service",
 		"name", service.Name,
 		"gateway_id", gatewayID,
 	)
 	err = k.clientset.CoreV1().Services(namespace).Delete(ctx, service.Name, deleteOptions)
 	if err != nil && !apierrors.IsNotFound(err) {
 		return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete service %s: %w", service.Name, err))
 	}
 }

 // List and delete Deployments with this gateway-id label
 //nolint: exhaustruct
 deploymentList, err := k.clientset.AppsV1().Deployments(namespace).List(ctx, metav1.ListOptions{
 	LabelSelector: labelSelector,
 })
 if err != nil {
 	return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to list deployments: %w", err))
 }

 for _, deployment := range deploymentList.Items {
 	k.logger.Debug("deleting deployment",
 		"name", deployment.Name,
 		"gateway_id", gatewayID,
 	)
 	err = k.clientset.AppsV1().Deployments(namespace).Delete(ctx, deployment.Name, deleteOptions)
 	if err != nil && !apierrors.IsNotFound(err) {
 		return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete deployment %s: %w", deployment.Name, err))
 	}
 }
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Search for Kubernetes resource types created during gateway creation
rg -n "clientset\.(CoreV1|AppsV1|NetworkingV1|RbacV1)" go/apps/krane/backend/kubernetes/gateway_create.go -A 3 -B 1
```

Repository: unkeyed/unkey. Length of output: 1329

🏁 Script executed:

```shell
#!/bin/bash
# Look for any Create calls in gateway_create.go
rg -n "\.Create\(" go/apps/krane/backend/kubernetes/gateway_create.go -B 2 -A 2
```

Repository: unkeyed/unkey. Length of output: 482

🏁 Script executed:

```shell
#!/bin/bash
# Get the complete gateway_delete.go to understand full deletion logic
wc -l go/apps/krane/backend/kubernetes/gateway_delete.go && cat -n go/apps/krane/backend/kubernetes/gateway_delete.go
```

Repository: unkeyed/unkey. Length of output: 3756

🏁 Script executed:

```shell
#!/bin/bash
# Get the full gateway_create.go to see all resources created
wc -l go/apps/krane/backend/kubernetes/gateway_create.go && cat -n go/apps/krane/backend/kubernetes/gateway_create.go
```

Repository: unkeyed/unkey. Length of output: 7556

**Delete the Namespace resource to prevent resource leaks.**

The deletion logic is missing cleanup of the Namespace resource, which is conditionally created during gateway creation (lines 35-46 in gateway_create.go). The namespace carries the `managed-by: krane` label and will accumulate indefinitely without deletion. As for ConfigMaps, Secrets, Ingresses, and ServiceAccounts: these resource types are not created during gateway provisioning, so the current deletion of Services and Deployments covers all other created resources.

Add namespace deletion to DeleteGateway, following the existing Service and Deployment deletion patterns but using the namespace name directly rather than a label selector.
🤖 Prompt for AI Agents

```
In go/apps/krane/backend/kubernetes/gateway_delete.go around lines 38 to 76,
add explicit Namespace deletion to avoid leaks: after deleting Services and
Deployments, call k.clientset.CoreV1().Namespaces().Delete(ctx, namespace,
deleteOptions) (use the namespace string directly, not a label selector), log
the deletion attempt with k.logger.Debug("deleting namespace", "name",
namespace, "gateway_id", gatewayID), and handle errors the same way as the
other deletes: ignore apierrors.IsNotFound(err), but return
connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete namespace
%s: %w", namespace, err)) for other errors.
```
```go
package labels

// Labels
const (
	DeploymentID   = "unkey.com/deployment.id"
	GatewayID      = "unkey.com/gateway.id"
	GatewayVersion = "unkey.com/gateway.version"
	ManagedBy      = "app.kubernetes.io/managed-by"
)
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Search for usages of GatewayVersion label constant
rg -n "GatewayVersion" --type go -C 3
```

Repository: unkeyed/unkey. Length of output: 558

**`GatewayVersion` is unused and should be removed or put to use.**

Verification confirms that the `GatewayVersion` constant is defined but never referenced anywhere in the codebase. Either use this constant in the deployment logic or remove it to avoid maintaining dead code.
🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/labels/labels.go around lines 1 to 9, the
GatewayVersion constant is defined but never used; remove the unused constant or
reference it where gateway image/version labels are set. To fix: either delete
the GatewayVersion line from the const block to eliminate dead code, or update
the deployment/manifest-labeling code (where other labels like GatewayID or
DeploymentID are applied) to use labels.GatewayVersion when populating gateway
version metadata and ensure imports compile. Make the minimal change consistent
with existing label usage and run `go vet`/`go build` to confirm no unused
symbol remains.

Kubernetes namespaces must be lowercase alphanumeric, our existing workspace_id is not suitable.
Worst case is that 2 workspace_ids would be lowercased to the same string and 2 users would share a kubernetes namespace.
What does this PR do?
- Adds a new `k8sNamespace` field to the workspace model

Fixes #1234
Type of change
How should this be tested?
Checklist
Required
- `pnpm build`
- `pnpm fmt`
- `make fmt` on `/go` directory
- no `console.log`s
- `git pull origin main`

Appreciated