
fix: use a new k8s_namespace column and use label based lookups#4436

Closed
chronark wants to merge 4 commits into main from 11-28-fix_use_a_new_k8s_namespace_column_and_use_label_based_lookups

Conversation

@chronark
Collaborator

@chronark chronark commented Nov 28, 2025

Kubernetes namespaces must be lowercase alphanumeric, so our existing workspace_id format is not suitable. In the worst case, two workspace_ids would be lowercased to the same string and two users would end up sharing a Kubernetes namespace.

What does this PR do?

  • Adding a k8sNamespace field to the workspace model
  • Implementing namespace generation during project creation
  • Improving Kubernetes resource management by accessing resources via labels instead of names
  • Updating deployment and gateway creation to use the workspace's namespace

Fixes #1234

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • Chore (refactoring code, technical debt, workflow improvements)
  • Enhancement (small improvements)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How should this be tested?

  • Create a new workspace and verify that it gets assigned a k8sNamespace value
  • Create a project in the workspace and verify that resources are created in the correct namespace
  • Delete a project and verify that all resources in the namespace are properly cleaned up
  • Check that existing workspaces continue to function properly

Checklist

Required

  • Filled out the "How to test" section in this PR
  • Read Contributing Guide
  • Self-reviewed my own code
  • Commented on my code in hard-to-understand areas
  • Ran pnpm build
  • Ran pnpm fmt
  • Ran make fmt on /go directory
  • Checked for warnings, there are none
  • Removed all console.logs
  • Merged the latest changes from main onto my branch with git pull origin main
  • My changes don't cause any responsiveness issues

Appreciated

  • If a UI change was made: Added a screen recording or screenshots to this PR
  • Updated the Unkey Docs if changes were necessary

@vercel

vercel bot commented Nov 28, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

  • dashboard — Deployment: Ready — Preview | Comment — Updated (UTC): Nov 28, 2025 0:10am
1 Skipped Deployment
  • engineering — Deployment: Ignored — Preview — Updated (UTC): Nov 28, 2025 0:10am

@changeset-bot

changeset-bot bot commented Nov 28, 2025

⚠️ No Changeset found

Latest commit: bfe3236

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types


@coderabbitai
Contributor

coderabbitai bot commented Nov 28, 2025

📝 Walkthrough

Walkthrough

This change introduces Kubernetes namespace abstraction for workspaces by adding a k8s_namespace column to the database schema, initializing it during workspace creation, and refactoring Kubernetes resource operations to use label-based identification instead of name-based approaches across deployment and gateway lifecycle operations.

Changes

  • Database Schema & Generated Code (go/pkg/db/schema.sql, internal/db/src/schema/workspaces.ts): Added a new k8s_namespace column (varchar(63)) with a unique constraint to the workspaces table.
  • Go Database Layer (go/pkg/db/models_generated.go, go/pkg/db/querier_generated.go, go/pkg/db/workspace_*.sql_generated.go, go/pkg/db/key_find_live_*.sql_generated.go, go/pkg/db/queries/workspace_update_k8s_namespace.sql): Added a K8sNamespace field to the Workspace struct; introduced the UpdateWorkspaceK8sNamespace query and parameter types; updated all workspace SELECT queries to include the k8s_namespace column.
  • TypeScript Workspace Initialization (apps/api/src/pkg/testutil/harness.ts, apps/dashboard/lib/trpc/routers/workspace/create.ts): Initialized the k8s_namespace field to null during workspace creation.
  • Go Project Creation Workflow (go/apps/ctrl/workflows/project/create.go): Added on-demand k8s_namespace initialization on first project creation; expanded project insertion parameters (Slug, GitRepositoryUrl, DefaultBranch, timestamps); replaced restate.Run with restate.RunVoid for insertions; updated gateway provisioning to use k8s_namespace instead of the workspace ID.
  • Kubernetes Label Constants (go/apps/krane/backend/kubernetes/labels/labels.go, go/apps/krane/backend/kubernetes/krane.go): Introduced centralized label constants (DeploymentID, GatewayID, GatewayVersion, ManagedBy); added a krane package constant.
  • Kubernetes Deployment Operations (go/apps/krane/backend/kubernetes/deployment_create.go, deployment_delete.go, deployment_get.go): Refactored to use label-based resource identification and GenerateName instead of explicit names; replaced single name-based deletions with label-based list-and-delete; updated all selectors and labels to reference the centralized constants.
  • Kubernetes Gateway Operations (go/apps/krane/backend/kubernetes/gateway_create.go, gateway_delete.go, gateway_get.go): Refactored to use label-based resource identification, GenerateName for naming, and the centralized label constants; updated service/deployment selectors and ownership references accordingly.
  • Kubernetes Utility (go/apps/krane/backend/kubernetes/id.go): Deleted the safeIDForK8s function (no longer needed with the label-based approach).
  • ID Generation (go/pkg/uid/nanoid.go, go/pkg/uid/nanoid_test.go): Added a Nano() function for generating short URL-safe alphanumeric strings (8 chars by default); included unit tests for length, uniqueness, and prefixing.
  • Dependency Updates (go/go.mod): Updated docker/docker (v28.4.0→v28.5.1), restatedev/sdk-go (v0.21.0→v0.22.0), golang.org/x/* packages, and transitive dependencies.

Sequence Diagram

sequenceDiagram
    participant Client
    participant API as API Layer
    participant DB as Database
    participant Ctrl as Control Workflow
    participant Krane as Krane (K8s)

    Client->>API: CreateWorkspace
    API->>DB: Insert workspace (k8s_namespace = null)
    DB-->>API: workspace created

    Client->>Ctrl: CreateProject (first for workspace)
    Ctrl->>DB: Check workspace.k8s_namespace
    DB-->>Ctrl: k8s_namespace is null
    Ctrl->>Ctrl: Generate k8s_namespace (Nano ID)
    Ctrl->>DB: UpdateWorkspaceK8sNamespace
    DB-->>Ctrl: namespace initialized
    
    Ctrl->>Ctrl: Generate project ID
    Ctrl->>DB: InsertProject (with slug, repo, timestamps)
    DB-->>Ctrl: project created
    
    Ctrl->>DB: InsertEnvironments (with timestamps)
    DB-->>Ctrl: environments created
    
    loop For each environment
        Ctrl->>Krane: CreateGateway (namespace=k8s_namespace, replicas)
        Krane->>Krane: Create NS, Deployment (GenerateName), Service (labels)
        Krane-->>Ctrl: gateway created
    end
    
    Ctrl-->>Client: project + environments + gateways

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

  • Database schema and generated code migrations: Verify schema changes, unique constraint placement, and consistency across all generated query files (workspace reads, updates, and foreign key dependencies)
  • Label-based refactoring across krane operations: Multiple files (deployment_* and gateway_*) undergo similar but distinct refactoring; verify label selectors are correctly constructed, GenerateName usage is consistent, and service ownership references are properly updated
  • Project creation workflow changes: Confirm k8s_namespace initialization logic (null check, error handling), integration with restate.RunVoid changes, and parameter expansion for project insertion
  • Cross-layer consistency: Ensure TypeScript initialization aligns with Go database layer and that workspace k8s_namespace flows correctly through project creation to Krane operations
  • Dependency updates: Verify compatibility of SDK and library upgrades, particularly restatedev/sdk-go version bump and its interaction with restate.RunVoid patterns

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Linked Issues check — ⚠️ Warning. The PR references issue #1234, titled 'feat: make playground responsive', which is unrelated to the PR's actual focus on the k8sNamespace implementation and label-based Kubernetes resource lookups. Resolution: update the linked issue reference to match the actual scope of this PR.
✅ Passed checks (4 passed)
  • Title check — ✅ Passed. The title 'fix: use a new k8s_namespace column and use label based lookups' accurately describes the main changes: adding the k8s_namespace column and implementing label-based resource lookups instead of name-based ones.
  • Description check — ✅ Passed. The PR description is comprehensive and well-structured, covering what the PR does, the type of change, testing instructions, and a completed checklist.
  • Out of Scope Changes check — ✅ Passed. The PR touches multiple systems (database schema, workspace models, Kubernetes resource management, dependency updates), but all changes support the core objective of implementing k8sNamespace and label-based lookups for workspace isolation and resource management.
  • Docstring Coverage — ✅ Passed. Docstring coverage is 100.00%, which exceeds the required threshold of 80.00%.




Collaborator Author

chronark commented Nov 28, 2025

@chronark chronark mentioned this pull request Nov 28, 2025
19 tasks
@vercel vercel bot temporarily deployed to Preview – engineering November 28, 2025 08:15 Inactive
@chronark chronark marked this pull request as ready for review November 28, 2025 08:16
@vercel vercel bot temporarily deployed to Preview – dashboard November 28, 2025 08:17 Inactive
Base automatically changed from 11-28-fix_dashboard_build_in_docker to main November 28, 2025 10:55
@vercel vercel bot temporarily deployed to Preview – dashboard November 28, 2025 12:10 Inactive
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
go/apps/krane/backend/kubernetes/gateway_get.go (1)

108-116: Fix Pod phase to status mapping.

Two issues in the switch statement:

  1. PodFailed maps to GATEWAY_STATUS_TERMINATING, but "terminating" implies graceful shutdown. A failed pod should map to a failure/error status or UNSPECIFIED.
  2. PodSucceeded has an empty handler with an incomplete comment, leaving podStatus as zero value.
 case corev1.PodFailed:
-	podStatus = kranev1.GatewayStatus_GATEWAY_STATUS_TERMINATING
+	podStatus = kranev1.GatewayStatus_GATEWAY_STATUS_UNSPECIFIED // or a dedicated FAILED status if available
 case corev1.PodSucceeded:
-	// Handling to handle at this point
+	podStatus = kranev1.GatewayStatus_GATEWAY_STATUS_UNSPECIFIED // Succeeded pods have completed their work
go/apps/krane/backend/kubernetes/deployment_get.go (1)

108-109: Clarify or fix the incomplete comment.

The comment "Handling to handle at this point" appears to be a typo or placeholder. If PodSucceeded intentionally falls through to use the zero value (DEPLOYMENT_STATUS_UNSPECIFIED), consider making this explicit.

 case corev1.PodSucceeded:
-	// Handling to handle at this point
+	podStatus = kranev1.DeploymentStatus_DEPLOYMENT_STATUS_UNSPECIFIED
🧹 Nitpick comments (7)
go/apps/krane/backend/kubernetes/krane.go (1)

1-3: Consider adding documentation.

This unexported constant lacks context. Please add a comment explaining its purpose and usage, especially since it appears to be related to Kubernetes namespace or resource identification.

 package kubernetes
 
+// krane is the constant identifier used for Kubernetes resources managed by Krane.
 const krane = "krane"
go/apps/krane/backend/kubernetes/gateway_delete.go (1)

78-83: Consider adjusting logged counts for accuracy.

The services_deleted and deployments_deleted counts reflect the number of resources listed, not necessarily the number deleted. If a resource is deleted between list and delete (resulting in a 404 that's silently ignored), the count would be slightly misleading. This is a minor observability nit.

If precise counts are desired, track successful deletions:

+	servicesDeleted := 0
 	for _, service := range serviceList.Items {
 		k.logger.Debug("deleting service",
 			"name", service.Name,
 			"gateway_id", gatewayID,
 		)
 		err = k.clientset.CoreV1().Services(namespace).Delete(ctx, service.Name, deleteOptions)
 		if err != nil && !apierrors.IsNotFound(err) {
 			return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete service %s: %w", service.Name, err))
 		}
+		if err == nil {
+			servicesDeleted++
+		}
 	}
go/apps/krane/backend/kubernetes/gateway_create.go (1)

179-181: Consider returning the service name in the response.

The response only returns the status. The caller will need to call GetGateway to discover the generated service name for constructing the DNS address. Returning the service name directly would save an extra API call.

 return connect.NewResponse(&kranev1.CreateGatewayResponse{
-	Status: kranev1.GatewayStatus_GATEWAY_STATUS_PENDING,
+	Status:  kranev1.GatewayStatus_GATEWAY_STATUS_PENDING,
+	Address: fmt.Sprintf("%s.%s.svc.cluster.local", service.Name, service.Namespace),
 }), nil

Note: This would require updating the proto definition to include an address field in CreateGatewayResponse.

go/apps/krane/backend/kubernetes/gateway_get.go (1)

47-52: Consider logging a warning when multiple deployments are found.

The code assumes a single deployment per gateway ID, taking the first result silently. If multiple deployments exist (indicating a data inconsistency), this should be logged for debugging.

 if len(deployments.Items) == 0 {
 	return nil, connect.NewError(connect.CodeNotFound, fmt.Errorf("gateway not found: %s", gatewayID))
 }

+if len(deployments.Items) > 1 {
+	k.logger.Warn("multiple deployments found for gateway, using first",
+		"gateway_id", gatewayID,
+		"count", len(deployments.Items),
+	)
+}
+
 // Use the first (and should be only) Deployment
 deployment := &deployments.Items[0]
go/apps/krane/backend/kubernetes/deployment_get.go (1)

47-52: Consider logging a warning if multiple StatefulSets match.

The comment notes there "should be only" one StatefulSet, but if multiple exist due to a bug, this would silently use just the first one.

 if len(statefulSets.Items) == 0 {
 	return nil, connect.NewError(connect.CodeNotFound, fmt.Errorf("deployment not found: %s", deploymentID))
 }

+if len(statefulSets.Items) > 1 {
+	k.logger.Warn("multiple statefulsets found for deployment", "deployment_id", deploymentID, "count", len(statefulSets.Items))
+}
+
 // Use the first (and should be only) StatefulSet
 sfs := &statefulSets.Items[0]
go/apps/ctrl/workflows/project/create.go (2)

42-47: Consider checking Valid explicitly and adding restate.WithName.

Using .String on a sql.NullString without checking .Valid works (returns empty string if null), but is not idiomatic. Also, this restate.Run call lacks WithName for consistency with other calls.

-k8sNamespace := workspace.K8sNamespace.String
+k8sNamespace := ""
+if workspace.K8sNamespace.Valid {
+	k8sNamespace = workspace.K8sNamespace.String
+}
 // This should really be in a dedicated createWorkspace call I think,
 // but this works for now
 if k8sNamespace == "" {
-	k8sNamespace, err = restate.Run(ctx, func(runCtx restate.RunContext) (string, error) {
+	k8sNamespace, err = restate.Run(ctx, func(runCtx restate.RunContext) (string, error) {
 		name := uid.Nano(12)

Add WithName option:

-	})
+	}, restate.WithName("init k8s namespace"))

55-61: Good use of atomic check-and-set pattern, but error message could be clearer.

The SQL condition k8s_namespace is null prevents race conditions. However, if affected != 1, it could mean another concurrent request already set the namespace. Consider re-fetching the workspace instead of returning an error.

 if affected != 1 {
-	return "", errors.New("failed to update workspace k8s namespace")
+	// Another request may have set the namespace concurrently; re-fetch
+	ws, err := db.Query.FindWorkspaceByID(runCtx, s.db.RW(), workspace.ID)
+	if err != nil {
+		return "", fmt.Errorf("failed to re-fetch workspace: %w", err)
+	}
+	if ws.K8sNamespace.Valid && ws.K8sNamespace.String != "" {
+		return ws.K8sNamespace.String, nil
+	}
+	return "", errors.New("failed to update workspace k8s namespace")
 }
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 210947e and bfe3236.

⛔ Files ignored due to path filters (1)
  • go/go.sum is excluded by !**/*.sum
📒 Files selected for processing (25)
  • apps/api/src/pkg/testutil/harness.ts (2 hunks)
  • apps/dashboard/lib/trpc/routers/workspace/create.ts (1 hunks)
  • go/apps/ctrl/workflows/project/create.go (4 hunks)
  • go/apps/krane/backend/kubernetes/deployment_create.go (7 hunks)
  • go/apps/krane/backend/kubernetes/deployment_delete.go (1 hunks)
  • go/apps/krane/backend/kubernetes/deployment_get.go (4 hunks)
  • go/apps/krane/backend/kubernetes/gateway_create.go (7 hunks)
  • go/apps/krane/backend/kubernetes/gateway_delete.go (1 hunks)
  • go/apps/krane/backend/kubernetes/gateway_get.go (4 hunks)
  • go/apps/krane/backend/kubernetes/id.go (0 hunks)
  • go/apps/krane/backend/kubernetes/krane.go (1 hunks)
  • go/apps/krane/backend/kubernetes/labels/labels.go (1 hunks)
  • go/go.mod (5 hunks)
  • go/pkg/db/key_find_live_by_hash.sql_generated.go (3 hunks)
  • go/pkg/db/key_find_live_by_id.sql_generated.go (3 hunks)
  • go/pkg/db/models_generated.go (1 hunks)
  • go/pkg/db/querier_generated.go (5 hunks)
  • go/pkg/db/queries/workspace_update_k8s_namespace.sql (1 hunks)
  • go/pkg/db/schema.sql (2 hunks)
  • go/pkg/db/workspace_find_by_id.sql_generated.go (2 hunks)
  • go/pkg/db/workspace_list.sql_generated.go (3 hunks)
  • go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (1 hunks)
  • go/pkg/uid/nanoid.go (1 hunks)
  • go/pkg/uid/nanoid_test.go (1 hunks)
  • internal/db/src/schema/workspaces.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • go/apps/krane/backend/kubernetes/id.go
🧰 Additional context used
🧠 Learnings (11)
📓 Common learnings
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 4190
File: go/internal/services/keys/verifier.go:51-53
Timestamp: 2025-10-30T15:10:52.743Z
Learning: PR #4190 for unkeyed/unkey is focused solely on database schema and query changes for identity-based credits. It adds IdentityCredits and KeyCredits fields to structs and queries, but does not implement the priority enforcement logic in the usagelimiter. The logic implementation is intentionally deferred to a later PR in the stack.
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: When querying or updating namespaces in the Unkey dashboard, always scope the operations to the current workspace using `eq(table.workspaceId, ctx.workspace.id)` to prevent cross-workspace access.
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3755
File: .github/workflows/runbook-freshness-check.yaml:157-173
Timestamp: 2025-08-08T14:59:52.283Z
Learning: Repo unkeyed/unkey: When a CI/workflow fix is deferred, imeyer prefers a thorough GitHub issue be opened with sections (Summary, Impact, Where, Repro, Observed vs Expected, Acceptance Criteria, Validation Plan, Out of Scope, References) and assigned to imeyer, including backlinks to the originating PR and comment.
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: In the Unkey dashboard, when making database queries involving workspaces, use `ctx.workspace.id` directly instead of fetching the workspace separately for better performance and security.
Learnt from: chronark
Repo: unkeyed/unkey PR: 2180
File: apps/dashboard/lib/constants/workspace-navigations.tsx:56-118
Timestamp: 2024-10-04T20:44:38.489Z
Learning: When typing the `workspace` parameter in functions like `createWorkspaceNavigation`, prefer importing the `Workspace` type from the database module and picking the necessary keys (e.g., `features`) instead of redefining the interface.
📚 Learning: 2025-04-08T09:34:24.576Z
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: When querying or updating namespaces in the Unkey dashboard, always scope the operations to the current workspace using `eq(table.workspaceId, ctx.workspace.id)` to prevent cross-workspace access.

Applied to files:

  • internal/db/src/schema/workspaces.ts
  • apps/dashboard/lib/trpc/routers/workspace/create.ts
  • go/pkg/db/queries/workspace_update_k8s_namespace.sql
  • go/pkg/db/workspace_list.sql_generated.go
  • go/pkg/db/querier_generated.go
  • apps/api/src/pkg/testutil/harness.ts
  • go/pkg/db/workspace_update_k8s_namespace.sql_generated.go
  • go/pkg/db/key_find_live_by_id.sql_generated.go
  • go/pkg/db/key_find_live_by_hash.sql_generated.go
📚 Learning: 2024-10-04T20:44:38.489Z
Learnt from: chronark
Repo: unkeyed/unkey PR: 2180
File: apps/dashboard/lib/constants/workspace-navigations.tsx:56-118
Timestamp: 2024-10-04T20:44:38.489Z
Learning: When typing the `workspace` parameter in functions like `createWorkspaceNavigation`, prefer importing the `Workspace` type from the database module and picking the necessary keys (e.g., `features`) instead of redefining the interface.

Applied to files:

  • internal/db/src/schema/workspaces.ts
  • apps/dashboard/lib/trpc/routers/workspace/create.ts
  • go/pkg/db/workspace_find_by_id.sql_generated.go
  • go/pkg/db/workspace_list.sql_generated.go
  • go/pkg/db/key_find_live_by_hash.sql_generated.go
📚 Learning: 2025-09-23T17:39:59.820Z
Learnt from: perkinsjr
Repo: unkeyed/unkey PR: 4009
File: apps/dashboard/app/(app)/[workspace]/apis/[apiId]/_overview/components/table/components/override-indicator.tsx:88-97
Timestamp: 2025-09-23T17:39:59.820Z
Learning: The useWorkspaceNavigation hook in the Unkey dashboard guarantees that a workspace exists. If no workspace is found, the hook redirects the user to create a new workspace. Users cannot be logged in without a workspace, and new users must create one to continue. Therefore, workspace will never be null when using this hook.

Applied to files:

  • apps/dashboard/lib/trpc/routers/workspace/create.ts
📚 Learning: 2025-04-08T09:34:24.576Z
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 2872
File: apps/dashboard/lib/trpc/routers/ratelimit/createNamespace.ts:36-39
Timestamp: 2025-04-08T09:34:24.576Z
Learning: In the Unkey dashboard, when making database queries involving workspaces, use `ctx.workspace.id` directly instead of fetching the workspace separately for better performance and security.

Applied to files:

  • go/pkg/db/queries/workspace_update_k8s_namespace.sql
  • go/pkg/db/workspace_find_by_id.sql_generated.go
  • go/pkg/db/workspace_list.sql_generated.go
  • go/pkg/db/querier_generated.go
  • go/pkg/db/key_find_live_by_id.sql_generated.go
  • go/pkg/db/key_find_live_by_hash.sql_generated.go
📚 Learning: 2025-09-01T15:10:44.959Z
Learnt from: imeyer
Repo: unkeyed/unkey PR: 3899
File: go/proto/metald/v1/deployment.proto:7-15
Timestamp: 2025-09-01T15:10:44.959Z
Learning: In the unkey/unkey repository, the metald service receives deployment_id values from upstream services rather than generating them internally. The deployment_id field in DeploymentRequest is part of a service-to-service communication pattern.

Applied to files:

  • go/apps/krane/backend/kubernetes/deployment_create.go
  • go/apps/krane/backend/kubernetes/labels/labels.go
  • go/apps/krane/backend/kubernetes/gateway_create.go
  • go/apps/krane/backend/kubernetes/deployment_get.go
📚 Learning: 2025-10-15T10:12:40.810Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 4098
File: go/proto/ctrl/v1/deployment.proto:33-36
Timestamp: 2025-10-15T10:12:40.810Z
Learning: In the Unkey codebase proto files (ctrl/v1/build.proto, ctrl/v1/deployment.proto, hydra/v1/deployment.proto), use `dockerfile_path` (not `docker_file_path`) for consistency in generated Go field names.

Applied to files:

  • go/apps/krane/backend/kubernetes/deployment_create.go
  • go/apps/krane/backend/kubernetes/labels/labels.go
📚 Learning: 2025-09-11T08:17:21.423Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3944
File: go/apps/ctrl/services/deployment/fallbacks/k8s.go:236-247
Timestamp: 2025-09-11T08:17:21.423Z
Learning: The K8s fallback backend in go/apps/ctrl/services/deployment/fallbacks/k8s.go is specifically designed for scenarios where the ctrl service runs inside a Kubernetes cluster. It should not be used when ctrl runs outside the cluster - in those cases, the Docker fallback should be used instead.

Applied to files:

  • go/apps/krane/backend/kubernetes/deployment_create.go
📚 Learning: 2025-07-16T09:18:45.379Z
Learnt from: ogzhanolguncu
Repo: unkeyed/unkey PR: 3564
File: go/cmd/cli/commands/deploy/deploy.go:153-158
Timestamp: 2025-07-16T09:18:45.379Z
Learning: In the go/cmd/cli/commands/deploy/ CLI codebase, ogzhanolguncu prefers to allow deployment to continue even when Docker push fails (around lines 153-158 in deploy.go) because the team is working locally and needs this behavior for local development workflows where registry access might not be available.

Applied to files:

  • go/go.mod
📚 Learning: 2025-09-12T08:01:20.792Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3944
File: go/pkg/db/acme_challenge_update_verified_with_expiry.sql_generated.go:31-39
Timestamp: 2025-09-12T08:01:20.792Z
Learning: Do not review or suggest changes to files with sql_generated.go suffix or other files marked as auto-generated (containing "Code generated by" comments), as these are generated by tools like sqlc and changes would be overwritten on regeneration.

Applied to files:

  • go/pkg/db/workspace_update_k8s_namespace.sql_generated.go
📚 Learning: 2025-09-01T08:29:10.199Z
Learnt from: Flo4604
Repo: unkeyed/unkey PR: 3895
File: go/pkg/db/key_list_live_by_auth_id.sql_generated.go:98-105
Timestamp: 2025-09-01T08:29:10.199Z
Learning: In Unkey's ListLiveKeysByKeyAuthID query, adding ka.workspace_id = k.workspace_id constraint negatively impacts index performance. Workspace validation is handled upstream at the API level before the query is called, making the additional constraint unnecessary.

Applied to files:

  • go/pkg/db/key_find_live_by_id.sql_generated.go
  • go/pkg/db/key_find_live_by_hash.sql_generated.go
🧬 Code graph analysis (13)
go/pkg/db/models_generated.go (1)
go/pkg/db/types/null_string.go (1)
  • NullString (10-10)
go/pkg/uid/nanoid_test.go (1)
go/pkg/uid/nanoid.go (1)
  • Nano (49-67)
go/apps/krane/backend/kubernetes/deployment_create.go (1)
go/apps/krane/backend/kubernetes/labels/labels.go (2)
  • DeploymentID (5-5)
  • ManagedBy (8-8)
go/apps/krane/backend/kubernetes/gateway_create.go (1)
go/apps/krane/backend/kubernetes/labels/labels.go (2)
  • ManagedBy (8-8)
  • GatewayID (6-6)
go/pkg/db/workspace_list.sql_generated.go (1)
go/pkg/db/models_generated.go (1)
  • Workspace (1012-1031)
go/apps/krane/backend/kubernetes/gateway_get.go (1)
go/apps/krane/backend/kubernetes/labels/labels.go (2)
  • GatewayID (6-6)
  • ManagedBy (8-8)
go/apps/krane/backend/kubernetes/deployment_get.go (1)
go/apps/krane/backend/kubernetes/labels/labels.go (2)
  • DeploymentID (5-5)
  • ManagedBy (8-8)
go/pkg/db/querier_generated.go (2)
go/pkg/db/interface.go (1)
  • DBTX (29-34)
go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (1)
  • UpdateWorkspaceK8sNamespaceParams (19-22)
go/apps/krane/backend/kubernetes/gateway_delete.go (2)
go/gen/proto/krane/v1/gateway.pb.go (6)
  • DeleteGatewayRequest (344-350)
  • DeleteGatewayRequest (363-363)
  • DeleteGatewayRequest (378-380)
  • DeleteGatewayResponse (396-400)
  • DeleteGatewayResponse (413-413)
  • DeleteGatewayResponse (428-430)
go/apps/krane/backend/kubernetes/labels/labels.go (2)
  • GatewayID (6-6)
  • ManagedBy (8-8)
go/apps/ctrl/workflows/project/create.go (9)
go/pkg/uid/nanoid.go (1)
  • Nano (49-67)
go/pkg/db/queries.go (1)
  • Query (29-29)
go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (1)
  • UpdateWorkspaceK8sNamespaceParams (19-22)
go/pkg/db/types/null_string.go (1)
  • NullString (10-10)
go/apps/krane/backend/kubernetes/service.go (1)
  • New (39-58)
go/apps/ctrl/workflows/project/service.go (1)
  • New (31-38)
go/pkg/db/project_insert.sql_generated.go (1)
  • InsertProjectParams (29-39)
go/pkg/db/environment_insert.sql_generated.go (1)
  • InsertEnvironmentParams (28-37)
apps/dashboard/gen/proto/krane/v1/gateway_pb.ts (1)
  • GatewayRequest (21-56)
go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (3)
go/pkg/db/types/null_string.go (1)
  • NullString (10-10)
go/pkg/db/queries.go (1)
  • Queries (3-3)
go/pkg/db/interface.go (1)
  • DBTX (29-34)
go/pkg/db/key_find_live_by_id.sql_generated.go (1)
go/pkg/db/models_generated.go (1)
  • Workspace (1012-1031)
go/pkg/db/key_find_live_by_hash.sql_generated.go (1)
go/pkg/db/models_generated.go (1)
  • Workspace (1012-1031)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Build / Build
  • GitHub Check: Test API / API Test Local
  • GitHub Check: Test Packages / Test
  • GitHub Check: Test Dashboard / Test Dashboard
  • GitHub Check: Test Go API Local / Test
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (24)
go/pkg/uid/nanoid.go (1)

1-67: LGTM! Clear security documentation.

The implementation is correct and the security warnings are appropriately prominent. The function uses math/rand/v2 which is suitable for internal identifiers like Kubernetes namespaces.

Note: With 8 characters from a 36-character alphabet (36^8 ≈ 2.8 trillion possibilities), collision probability becomes non-negligible after ~1-2 million workspaces due to the birthday paradox. The unique constraint on k8s_namespace in the database schema (line 211 of go/pkg/db/schema.sql) provides the necessary collision protection.
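The generator described above can be sketched as follows. The function name, signature, and default length here are assumptions based on this comment, not the actual `uid.Nano` implementation:

```go
package main

import (
	"fmt"
	"math/rand/v2"
)

// Lowercase-alphanumeric alphabet, matching the 36-character alphabet
// described above (every character is valid in an RFC 1123 DNS label).
const nanoAlphabet = "abcdefghijklmnopqrstuvwxyz0123456789"

// nano sketches a uid.Nano-style generator. math/rand/v2 is NOT
// cryptographically secure; this is only suitable for non-secret
// identifiers such as Kubernetes namespace names.
func nano(length int) string {
	b := make([]byte, length)
	for i := range b {
		b[i] = nanoAlphabet[rand.IntN(len(nanoAlphabet))]
	}
	return string(b)
}

func main() {
	fmt.Println(len(nano(8))) // 8
}
```

Because collisions remain possible at scale, the database `UNIQUE` constraint (not the generator) is what ultimately guarantees uniqueness.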

go/pkg/uid/nanoid_test.go (1)

1-91: LGTM! Comprehensive test coverage.

The tests cover all essential cases: default behavior, custom lengths, edge cases (zero length), and uniqueness smoke testing. The validation of character membership in nanoAlphabet ensures correctness.

apps/dashboard/lib/trpc/routers/workspace/create.ts (1)

84-84: LGTM! Correct initialization.

Initializing k8sNamespace to null is appropriate here. Based on the PR objectives, the namespace will be generated when a project is created within the workspace, not during workspace creation itself.

go/pkg/db/models_generated.go (1)

1018-1018: LGTM! Generated code is correct.

The K8sNamespace field addition is properly generated with the correct type (sql.NullString) and db tag. The positioning after PartitionID matches the schema definition.

go/pkg/db/schema.sql (1)

195-211: LGTM! Schema changes are well-designed.

The column definition is correct:

  • varchar(63) matches Kubernetes namespace naming limits (RFC 1123 DNS label)
  • Nullable allows assignment after workspace creation
  • UNIQUE constraint prevents namespace collisions, which is the core purpose of this PR
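Taken together, the column described above corresponds to a DDL shape like the following. Table and constraint names are illustrative assumptions, not the migration actually shipped:

```sql
-- Illustrative only: actual table/constraint names may differ.
ALTER TABLE workspaces
  ADD COLUMN k8s_namespace varchar(63) NULL,
  ADD CONSTRAINT workspaces_k8s_namespace_unique UNIQUE (k8s_namespace);
```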
apps/api/src/pkg/testutil/harness.ts (2)

272-272: LGTM! Test fixture updated correctly.

The k8sNamespace: null initialization is consistent with the production workspace creation code and properly reflects the nullable nature of the field.


292-292: LGTM! Consistent initialization.

Both test workspaces correctly initialize k8sNamespace to null, maintaining consistency across test fixtures.

go/pkg/db/workspace_find_by_id.sql_generated.go (1)

13-44: LGTM! Generated query code is correct.

The k8s_namespace column is properly included in the SELECT statement and the row scanning logic. The field ordering matches the schema definition.

internal/db/src/schema/workspaces.ts (1)

31-32: LGTM! Kubernetes namespace column properly defined.

The varchar(63) length correctly aligns with Kubernetes namespace naming limits (RFC 1123 DNS label), and the unique constraint addresses the collision prevention goal stated in the PR objectives.

go/pkg/db/queries/workspace_update_k8s_namespace.sql (1)

1-5: Good idempotent update pattern.

The k8s_namespace is null condition ensures existing namespace assignments are never overwritten. Callers should check RowsAffected() from the sql.Result to distinguish between a successful update and a no-op when the namespace was already set.
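The "set once" semantics of that `where ... k8s_namespace is null` clause can be illustrated in isolation; the in-memory map below stands in for the database row, so all names are illustrative:

```go
package main

import "fmt"

// claimNamespace mimics the idempotent update
// `update ... set k8s_namespace = ? where id = ? and k8s_namespace is null`:
// it assigns ns only when no namespace exists yet, and returns whether a
// row was affected (the RowsAffected() == 1 case).
func claimNamespace(store map[string]string, workspaceID, ns string) bool {
	if _, exists := store[workspaceID]; exists {
		return false // no-op: namespace already set, RowsAffected() == 0
	}
	store[workspaceID] = ns
	return true
}

func main() {
	store := map[string]string{}
	fmt.Println(claimNamespace(store, "ws_1", "abc123xy")) // true: first claim wins
	fmt.Println(claimNamespace(store, "ws_1", "zzz999zz")) // false: never overwritten
	fmt.Println(store["ws_1"])                             // abc123xy
}
```

A caller that sees the `false` (zero rows) case should re-read the workspace to pick up the namespace another writer already assigned.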

go/pkg/db/key_find_live_by_hash.sql_generated.go (1)

18-18: Generated code correctly includes k8s_namespace.

The SELECT column addition and corresponding scan at line 286 are properly aligned with the updated Workspace struct.

go/pkg/db/workspace_list.sql_generated.go (1)

14-14: Generated code correctly propagates k8s_namespace.

The column addition is consistent across the SELECT statement, documentation comment, and scan operations.

go/pkg/db/key_find_live_by_id.sql_generated.go (1)

18-18: Generated code correctly includes k8s_namespace.

Consistent with the schema changes and other generated query files.

go/apps/krane/backend/kubernetes/gateway_delete.go (1)

15-31: Label-based deletion is a robust improvement.

The shift from name-based to label-based resource selection follows Kubernetes best practices and provides more reliable cleanup. The selector correctly combines GatewayID and ManagedBy labels for precise targeting.
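A minimal sketch of the selector construction this comment refers to, using the label keys from the labels package; the `krane` value for `ManagedBy` is an assumption drawn from the surrounding diff:

```go
package main

import "fmt"

// Label keys as defined in go/apps/krane/backend/kubernetes/labels/labels.go.
const (
	labelGatewayID = "unkey.com/gateway.id"
	labelManagedBy = "app.kubernetes.io/managed-by"
)

// gatewaySelector builds the string passed to
// metav1.ListOptions{LabelSelector: ...} so that list-then-delete targets
// only resources created for this gateway and managed by krane.
func gatewaySelector(gatewayID string) string {
	return fmt.Sprintf("%s=%s,%s=%s", labelGatewayID, gatewayID, labelManagedBy, "krane")
}

func main() {
	fmt.Println(gatewaySelector("gw_123"))
	// unkey.com/gateway.id=gw_123,app.kubernetes.io/managed-by=krane
}
```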

go/apps/krane/backend/kubernetes/deployment_delete.go (1)

46-55: LGTM on the service deletion loop.

The error handling correctly ignores IsNotFound errors, which is appropriate since another process or garbage collection could have already deleted the resource. The per-resource logging provides good observability.

go/pkg/db/querier_generated.go (1)

2090-2095: LGTM - idempotent update pattern.

The WHERE id = ? and k8s_namespace is null condition ensures the namespace is only set once, preventing accidental overwrites. This is the correct pattern for a one-time migration/initialization of the new column.

go/apps/krane/backend/kubernetes/gateway_create.go (1)

57-63: Good use of GenerateName and centralized labels.

Using GenerateName instead of a fixed name avoids naming collisions when multiple gateways exist in a namespace. The label-based identification pattern with labels.GatewayID is a solid approach for resource management.
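`GenerateName` delegates suffixing to the API server; conceptually it behaves like the sketch below. The real suffix alphabet and length live in apimachinery's name generator, so treat this as an approximation:

```go
package main

import (
	"fmt"
	"math/rand/v2"
)

// suffixAlphabet approximates the charset the API server uses when a
// resource sets metadata.generateName instead of metadata.name
// (vowels and easily confused characters are excluded).
const suffixAlphabet = "bcdfghjklmnpqrstvwxz2456789"

// generateName appends a random 5-character suffix to the prefix,
// mimicking how the API server resolves generateName.
func generateName(prefix string) string {
	b := make([]byte, 5)
	for i := range b {
		b[i] = suffixAlphabet[rand.IntN(len(suffixAlphabet))]
	}
	return prefix + string(b)
}

func main() {
	fmt.Println(generateName("gateway-")) // e.g. gateway-x7k2m
}
```

Because the name is no longer deterministic, label-based lookup (as done here) is the only reliable way to find the resource again.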

go/apps/krane/backend/kubernetes/gateway_get.go (2)

36-36: LGTM on the label selector construction.

Using fmt.Sprintf("%s=%s", labels.GatewayID, gatewayID) is consistent with the label constants defined in the labels package.


77-79: Appropriate error classification for missing service.

Returning CodeInternal when the deployment exists but the service doesn't is correct. This indicates an inconsistent state (the OwnerReference should have ensured the service exists), not that the gateway doesn't exist.

go/apps/krane/backend/kubernetes/deployment_create.go (1)

70-79: LGTM on the namespace and deploymentID extraction.

The refactoring to use namespace from the request and centralized labels for resource identification aligns well with the PR's objective of preventing namespace collisions.

go/pkg/db/workspace_update_k8s_namespace.sql_generated.go (1)

1-31: Skipping review of auto-generated file.

This file is generated by sqlc and should not be manually modified. Based on learnings, files with sql_generated.go suffix are excluded from review.

go/apps/ctrl/workflows/project/create.go (2)

139-156: LGTM on gateway provisioning with k8sNamespace.

The gateway creation correctly uses the workspace's k8sNamespace instead of the workspace ID, which aligns with the PR objective of preventing namespace collisions.


47-47: No issue found: uid.Nano produces Kubernetes-valid namespace names.

The nanoAlphabet constant (line 10 of go/pkg/uid/nanoid.go) is defined as "abcdefghijklmnopqrstuvwxyz0123456789" — containing only lowercase letters and digits. This generates 12-character strings that fully comply with Kubernetes namespace requirements (lowercase alphanumeric, max 63 characters). The original concern about invalid characters is unfounded.
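That validity claim can be checked against the RFC 1123 DNS-label rule Kubernetes enforces for namespace names; the `ws_...` workspace ID below is a hypothetical example of why raw workspace IDs were unsuitable:

```go
package main

import (
	"fmt"
	"regexp"
)

// dnsLabel encodes the RFC 1123 DNS-label rule Kubernetes applies to
// namespace names: lowercase alphanumerics, optional interior hyphens,
// must start and end with an alphanumeric, at most 63 characters.
var dnsLabel = regexp.MustCompile(`^[a-z0-9]([a-z0-9-]{0,61}[a-z0-9])?$`)

func isValidNamespace(s string) bool {
	return dnsLabel.MatchString(s)
}

func main() {
	fmt.Println(isValidNamespace("abc123def456"))  // true: nanoAlphabet output passes
	fmt.Println(isValidNamespace("ws_2GdTEYNQZS")) // false: underscore and uppercase
}
```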

go/go.mod (1)

20-20: Dependency updates appear incidental to this PR, but verify restatedev/sdk-go v0.22.0 compatibility.

These go.mod changes were pulled in from the main branch merge and aren't directly related to the k8s_namespace column feature. Docker/docker v28.5.1 is safe—it contains no breaking Go API changes, only Go runtime and field deprecations. However, restatedev/sdk-go v0.22.0 could not be verified via web search (no public release notes found), and it is actively used in your protobuf-generated Restate integration code. Before merging, confirm that v0.22.0 is compatible with your Restate setup and doesn't introduce breaking changes to the encoding or ingress modules used in go/gen/proto/hydra/v1/*_restate.pb.go. The golang.org/x and OpenTelemetry updates are routine maintenance and safe.


namespace := req.Msg.GetDeployment().GetNamespace()
deploymentID := req.Msg.GetDeployment().GetDeploymentId()
const krane = "krane"
🛠️ Refactor suggestion | 🟠 Major

Extract krane constant to package level for reuse.

This constant is duplicated/missing in deployment_get.go. Based on the AI summary, there should be a shared constant in go/apps/krane/backend/kubernetes/krane.go. Consider using that or moving this to package level.

-func (k *k8s) CreateDeployment(...) {
-	const krane = "krane"
+const krane = "krane"
+
+func (k *k8s) CreateDeployment(...) {

Or import from the shared location if krane.go exports it:

#!/bin/bash
# Check if krane constant exists in krane.go
cat go/apps/krane/backend/kubernetes/krane.go 2>/dev/null || echo "File not found"
🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/deployment_create.go around line 74, the
local const `krane` is duplicated elsewhere; extract it to the package level or
reuse the shared constant in go/apps/krane/backend/kubernetes/krane.go: either
move the const declaration out of the function to the top of the package (so
other files can reference it) or, if krane.go already defines and exports the
constant, import/use that exported name instead; update deployment_create.go to
reference the package-level constant and remove the local duplicate, and adjust
visibility/name (exported vs unexported) in krane.go as needed so all files
compile.

Comment on lines 172 to 175
Containers: []corev1.Container{
{
Name: "todo",

Image: req.Msg.GetDeployment().GetImage(),
⚠️ Potential issue | 🔴 Critical

Missing required Name field for container.

Kubernetes requires every container to have a Name field. Without it, the StatefulSet creation will fail with a validation error.

 Containers: []corev1.Container{
 	{
-
+		Name:  "app",
 		Image: req.Msg.GetDeployment().GetImage(),
📝 Committable suggestion


Suggested change

 Containers: []corev1.Container{
 	{
-		Name:  "todo",
+		Name:  "app",
 		Image: req.Msg.GetDeployment().GetImage(),
🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/deployment_create.go around lines 172-175,
the corev1.Container literal is missing the required Name field which will cause
Kubernetes validation to fail; set the container Name to a valid DNS-1123 label
derived from the deployment name (e.g. sanitize and lower-case
req.Msg.GetDeployment().GetName() and append "-container"), with a safe fallback
like "container" if the deployment name is empty, and ensure the resulting
string conforms to Kubernetes naming rules before assigning it to the
Container.Name field.

Comment on lines +29 to +30
// Create label selector for this deployment
labelSelector := fmt.Sprintf("deployment-id=%s", deploymentID)
⚠️ Potential issue | 🔴 Critical

Use the centralized label constant for consistency.

The label selector uses the hard-coded string "deployment-id" but the new labels package defines DeploymentID = "unkey.com/deployment.id". This inconsistency means resources created with the new label scheme won't be found during deletion.

+import "github.com/unkeyed/unkey/go/apps/krane/backend/kubernetes/labels"
+
 // Create label selector for this deployment
-labelSelector := fmt.Sprintf("deployment-id=%s", deploymentID)
+labelSelector := fmt.Sprintf("%s=%s", labels.DeploymentID, deploymentID)

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/deployment_delete.go around lines 29-30, the
label selector is built using the hard-coded "deployment-id" string which is
inconsistent with the centralized labels package; replace the hard-coded literal
with the labels.DeploymentID constant (e.g. fmt.Sprintf("%s=%s",
labels.DeploymentID, deploymentID)) and add the labels package to the file's
import list so deletion uses the same label key as resource creation.


k.logger.Info("getting deployment", "deployment_id", k8sDeploymentID)
// Create label selector for this deployment
labelSelector := fmt.Sprintf("%s=%s,%s=%s", labels.DeploymentID, deploymentID, labels.ManagedBy, krane)
⚠️ Potential issue | 🔴 Critical

Undefined variable krane will cause compilation failure.

The variable krane is used here but is not declared in this file. In deployment_create.go, it's defined as a local constant inside the function (const krane = "krane"), making it inaccessible here.

Either import from a shared location or define the constant in this file:

+const krane = "krane"
+
 func (k *k8s) GetDeployment(ctx context.Context, req *connect.Request[kranev1.GetDeploymentRequest]) (*connect.Response[kranev1.GetDeploymentResponse], error) {

Alternatively, verify if there's a shared constant in go/apps/krane/backend/kubernetes/krane.go that should be used here.

#!/bin/bash
# Check if krane constant is defined elsewhere
rg -n "const krane" --type=go
ast-grep --pattern 'const krane = $_'
🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/deployment_get.go around line 36 the
identifier `krane` is undefined and will cause a compile error; fix by either
adding a file-scoped constant `const krane = "krane"` near the top of this file
or (preferred) import and use the shared constant from
go/apps/krane/backend/kubernetes/krane.go (e.g., replace `krane` with the
exported constant name), then rebuild to ensure the labelSelector line compiles.

Comment on lines +38 to +76
// List and delete Services with this gateway-id label
//nolint: exhaustruct
serviceList, err := k.clientset.CoreV1().Services(namespace).List(ctx, metav1.ListOptions{
LabelSelector: labelSelector,
})
if err != nil {
return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to list services: %w", err))
}

for _, service := range serviceList.Items {
k.logger.Debug("deleting service",
"name", service.Name,
"gateway_id", gatewayID,
)
err = k.clientset.CoreV1().Services(namespace).Delete(ctx, service.Name, deleteOptions)
if err != nil && !apierrors.IsNotFound(err) {
return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete service %s: %w", service.Name, err))
}
}

// List and delete Deployments with this gateway-id label
//nolint: exhaustruct
deploymentList, err := k.clientset.AppsV1().Deployments(namespace).List(ctx, metav1.ListOptions{
LabelSelector: labelSelector,
})
if err != nil {
return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to list deployments: %w", err))
}

for _, deployment := range deploymentList.Items {
k.logger.Debug("deleting deployment",
"name", deployment.Name,
"gateway_id", gatewayID,
)
err = k.clientset.AppsV1().Deployments(namespace).Delete(ctx, deployment.Name, deleteOptions)
if err != nil && !apierrors.IsNotFound(err) {
return nil, connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete deployment %s: %w", deployment.Name, err))
}
}
⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for Kubernetes resource types created during gateway creation
rg -n "clientset\.(CoreV1|AppsV1|NetworkingV1|RbacV1)" go/apps/krane/backend/kubernetes/gateway_create.go -A 3 -B 1

Repository: unkeyed/unkey

Length of output: 1329


🏁 Script executed:

#!/bin/bash
# Look for any Create calls in gateway_create.go
rg -n "\.Create\(" go/apps/krane/backend/kubernetes/gateway_create.go -B 2 -A 2

Repository: unkeyed/unkey

Length of output: 482


🏁 Script executed:

#!/bin/bash
# Get the complete gateway_delete.go to understand full deletion logic
wc -l go/apps/krane/backend/kubernetes/gateway_delete.go && cat -n go/apps/krane/backend/kubernetes/gateway_delete.go

Repository: unkeyed/unkey

Length of output: 3756


🏁 Script executed:

#!/bin/bash
# Get the full gateway_create.go to see all resources created
wc -l go/apps/krane/backend/kubernetes/gateway_create.go && cat -n go/apps/krane/backend/kubernetes/gateway_create.go

Repository: unkeyed/unkey

Length of output: 7556


Delete the Namespace resource to prevent resource leaks.

The deletion logic is missing cleanup of the Namespace resource, which is conditionally created during gateway creation (line 35-46 in gateway_create.go). The namespace has the managed-by: krane label and will accumulate indefinitely without deletion. Regarding ConfigMaps, Secrets, Ingress, and ServiceAccounts: these resource types are not created during gateway provisioning, so the current deletion of Services and Deployments is complete for all created resources.

Add namespace deletion to DeleteGateway similar to the existing Service and Deployment deletion patterns, using the namespace name directly rather than label selectors.

🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/gateway_delete.go around lines 38 to 76, add
explicit Namespace deletion to avoid leaks: after deleting Services and
Deployments call k.clientset.CoreV1().Namespaces().Delete(ctx, namespace,
deleteOptions) (use the namespace string directly, not a label selector), log
the deletion attempt with k.logger.Debug("deleting namespace", "name",
namespace, "gateway_id", gatewayID), and handle errors the same way as other
deletes—ignore apierrors.IsNotFound(err) but return
connect.NewError(connect.CodeInternal, fmt.Errorf("failed to delete namespace
%s: %w", namespace, err)) for other errors.

Comment on lines +1 to +9
package labels

// Labels
const (
DeploymentID = "unkey.com/deployment.id"
GatewayID = "unkey.com/gateway.id"
GatewayVersion = "unkey.com/gateway.version"
ManagedBy = "app.kubernetes.io/managed-by"
)
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Search for usages of GatewayVersion label constant
rg -n "GatewayVersion" --type go -C 3

Repository: unkeyed/unkey

Length of output: 558


GatewayVersion is unused and should be removed or put to use.

Verification confirms that the GatewayVersion constant is defined but never referenced anywhere in the codebase. Either use this constant in the deployment logic or remove it to avoid maintaining dead code.

🤖 Prompt for AI Agents
In go/apps/krane/backend/kubernetes/labels/labels.go around lines 1 to 9, the
GatewayVersion constant is defined but never used; remove the unused constant or
reference it where gateway image/version labels are set. To fix: either delete
the GatewayVersion line from the const block to eliminate dead code, or update
the deployment/manifest-labeling code (where other labels like GatewayID or
DeploymentID are applied) to use labels.GatewayVersion when populating gateway
version metadata and ensure imports compile. Make the minimal change consistent
with existing label usage and run `go vet`/`go build` to confirm no unused
symbol remains.

@chronark chronark closed this Jan 4, 2026
@Flo4604 Flo4604 deleted the 11-28-fix_use_a_new_k8s_namespace_column_and_use_label_based_lookups branch January 22, 2026 14:05