
feat: run deploy locally #3959

Merged
chronark merged 4 commits into main from feat/run-deploy-locally
Sep 15, 2025

Conversation

@Flo4604
Member

@Flo4604 Flo4604 commented Sep 12, 2025

What does this PR do?

Fixes # (issue)

If there is not an issue for this, please create one first. This is used for tracking purposes and also helps us understand why this PR exists

Type of change

  • Bug fix (non-breaking change which fixes an issue)
  • Chore (refactoring code, technical debt, workflow improvements)
  • Enhancement (small improvements)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

How should this be tested?

  • Test A
  • Test B

Checklist

Required

  • Filled out the "How to test" section in this PR
  • Read Contributing Guide
  • Self-reviewed my own code
  • Commented on my code in hard-to-understand areas
  • Ran pnpm build
  • Ran pnpm fmt
  • Checked for warnings, there are none
  • Removed all console.logs
  • Merged the latest changes from main onto my branch with git pull origin main
  • My changes don't cause any responsiveness issues

Appreciated

  • If a UI change was made: Added a screen recording or screenshots to this PR
  • Updated the Unkey Docs if changes were necessary

Summary by CodeRabbit

  • New Features

    • Local TLS support with optional auto-generated self-signed certificates for *.unkey.local.
    • Wildcard DNS setup script to resolve *.unkey.local to 127.0.0.1.
    • New CLI flags to enable local certs and indicate the service is running in Docker.
    • Added dashboard and observability stack (OTEL, Prometheus) and a scaled API v2 behind a load balancer.
  • Chores

    • Updated deployment configuration to expose 80/443, enable TLS defaults, add services, and streamline environment keys.
    • Added certs/ to .gitignore.

@changeset-bot

changeset-bot bot commented Sep 12, 2025

⚠️ No Changeset found

Latest commit: 6e6adf0

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR

@vercel

vercel bot commented Sep 12, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

2 Skipped Deployments
Project     | Deployment | Preview | Comments | Updated (UTC)
dashboard   | Ignored    | Ignored | Preview  | Sep 15, 2025 9:32am
engineering | Ignored    | Ignored | Preview  | Sep 15, 2025 9:32am

@coderabbitai
Contributor

coderabbitai bot commented Sep 12, 2025

📝 Walkthrough

Walkthrough

Introduces local TLS certificate generation in the gateway, adds a docker-running flag to ctrl, plumbs a “running in Docker” signal through deployment backends, enhances Docker host/port resolution, adds TTL/Job support to the K8s backend, extends deploy workflow config, and updates docker-compose with new services, TLS settings, and a local DNS helper script. Also ignores certs/ in Git.

Changes

Cohort / File(s) Summary
Deployment config & DNS
deployment/docker-compose.yaml, deployment/setup-wildcard-dns.sh, .gitignore
Adds many services (planetscale, apiv2_lb, apiv2 (replicated), api, otel, prometheus, dashboard, agent), updates gw to expose 80/443, TLS envs and ./certs:/certs volume, adjusts ctrl envs and mounts Docker socket, standardizes DB connection params (adds interpolateParams), adds Redis/Vault S3 URL and acme-vault bucket, and adds deployment/setup-wildcard-dns.sh. Adds certs/ to .gitignore.
Ctrl CLI & config
go/cmd/ctrl/main.go, go/apps/ctrl/config.go
Adds CLI flag --docker-running (env UNKEY_DOCKER_RUNNING) and new IsRunningDocker bool field on ctrl Config to convey runtime Docker context.
Deployment plumbing & adapters
go/apps/ctrl/services/deployment/backend_adapter.go, go/apps/ctrl/services/deployment/backends/interface.go
Propagates isRunningDocker flag into backend factory and adapters; updates NewBackend/NewLocalBackend/NewDeploymentBackend signatures to accept the boolean and pass it through.
Docker backend
go/apps/ctrl/services/deployment/backends/docker.go
Adds isRunningDocker field to DockerBackend; NewDockerBackend now accepts the flag. GetDeploymentStatus prefers container IP when available; otherwise uses host.docker.internal if running in Docker, or localhost, and derives external port from PortBindings.
Kubernetes backend
go/apps/ctrl/services/deployment/backends/k8s.go
Adds TTL-based lifecycle: ttlSeconds on K8sBackend; when >0 creates BatchV1 Job with TTLSecondsAfterFinished/ActiveDeadlineSeconds and tracks per-deployment JobName/UseJob. Supports Job-based status mapping and deletes appropriate resource on removal; preserves non-TTL Deployment path.
Deploy workflow
go/apps/ctrl/services/deployment/deploy_workflow.go
Adds IsRunningDocker to DeployWorkflowConfig, passes it to NewDeploymentBackend, updates isLocalHostname signature to accept defaultDomain and broaden local-domain detection (.local and .test).
Gateway local certs
go/apps/gw/config.go, go/apps/gw/localcert.go, go/apps/gw/run.go
Adds RequireLocalCert to gateway config and CLI flag; implements generateLocalCertificate to check DB, create a 4096-bit self-signed wildcard cert, write PEM files under certs/, encrypt private key via Vault and store encrypted key + cert in DB, and prints trust-install instructions. Run invokes generation when enabled.
Gateway CLI
go/cmd/gw/main.go
Adds --require-local-cert (env UNKEY_REQUIRE_LOCAL_CERT) flag and wires it into gw.Config.RequireLocalCert.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  participant User
  participant GW as Gateway
  participant Vault
  participant PDB as PartitionedDB
  participant FS as Filesystem

  Note over GW: Startup with RequireLocalCert=true
  User->>GW: Start
  GW->>PDB: Lookup cert for *.unkey.local
  alt Found valid cert
    GW-->>User: Continue startup
  else Not found
    GW->>GW: Generate RSA key + self-signed cert
    GW->>FS: Write unkey.local.crt/.key
    GW->>Vault: Encrypt private key
    Vault-->>GW: Encrypted blob
    GW->>PDB: Upsert cert + encrypted key
    GW-->>User: Continue startup
  end
sequenceDiagram
  autonumber
  participant CTRL as ctrl
  participant WF as DeployWorkflow
  participant BSel as BackendFactory
  participant Dkr as DockerBackend
  participant K8s as K8sBackend

  Note over CTRL: CLI flag --docker-running
  CTRL->>WF: NewDeployWorkflow(IsRunningDocker)
  WF->>BSel: NewDeploymentBackend(..., isRunningDocker)
  alt Backend = docker
    BSel->>Dkr: NewDockerBackend(isRunningDocker)
    Dkr->>Dkr: GetDeploymentStatus
    alt Container IP available
      Dkr-->>WF: host=containerIP, port=8080
    else No container IP
      alt isRunningDocker
        Dkr-->>WF: host=host.docker.internal, port=externalBinding
      else not running in Docker
        Dkr-->>WF: host=localhost, port=externalBinding
      end
    end
  else Backend = k8s
    BSel->>K8s: NewK8sBackend(ttlSeconds)
    alt ttlSeconds>0
      WF->>K8s: Create Job (TTL)
      K8s-->>WF: Track UseJob=true
    else
      WF->>K8s: Create Deployment
      K8s-->>WF: Track UseJob=false
    end
  end
  WF-->>CTRL: Backend ready

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60–90 minutes

Possibly related PRs

Pre-merge checks and finishing touches

❌ Failed checks (2 warnings)
Check name: Description Check — Status: ⚠️ Warning
Explanation: The PR description is essentially the unmodified repository template with placeholder entries and missing content: there is no linked issue, no selected change type, the "How should this be tested?" section contains only placeholders, and none of the checklist items are completed. Given the breadth of changes (new docker-compose topology, setup script, local cert generation, new flags and backend behavior), this is insufficient for review or testing, and the description does not meet the repository's template requirements.
Resolution: Update the PR description before review: add a concise summary and motivation, link (or create) the related issue, and select the appropriate "Type of change"; populate "How should this be tested?" with concrete, reproducible steps and commands covering docker-compose startup, deployment flows, setup-wildcard-dns.sh usage, and verification of local certificate generation and service endpoints; run and confirm pnpm build/pnpm fmt, remove any console.logs, check the required checklist boxes, and note any required environment variables or migration/security considerations (e.g., UNKEY_REQUIRE_LOCAL_CERT, UNKEY_DOCKER_RUNNING, Vault/S3 changes) so reviewers can assess risk.

Check name: Docstring Coverage — Status: ⚠️ Warning
Explanation: Docstring coverage is 75.00%, which is below the required threshold of 80.00%.
Resolution: Run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
Check name: Title Check — Status: ✅ Passed
Explanation: The title "feat: run deploy locally" is concise and accurately summarizes the primary intent of the changeset — enabling local deployments — which aligns with the large additions to docker-compose, the local DNS script, and the runtime flags for local/docker execution described in the diff. It is specific enough for a reviewer scanning history to understand the primary change without extraneous detail.
✨ Finishing touches
  • 📝 Generate Docstrings
🧪 Generate unit tests
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feat/run-deploy-locally

Tip

👮 Agentic pre-merge checks are now available in preview!

Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs.

  • Built-in checks – Quickly apply ready-made checks to enforce title conventions, require pull request descriptions that follow templates, validate linked issues for compliance, and more.
  • Custom agentic checks – Define your own rules using CodeRabbit’s advanced agentic capabilities to enforce organization-specific policies and workflows. For example, you can instruct CodeRabbit’s agent to verify that API documentation is updated whenever API schema files are modified in a PR. Note: Up to 5 custom checks are currently allowed during the preview period. Pricing for this feature will be announced in a few weeks.

Please see the documentation for more information.

Example:

reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).

Please share your feedback with us on this Discord post.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@Flo4604 Flo4604 marked this pull request as ready for review September 15, 2025 08:41
@github-actions
Contributor

github-actions bot commented Sep 15, 2025

Thank you for following the naming conventions for pull request titles! 🙏

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 20

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (13)
go/apps/ctrl/services/deployment/backends/interface.go (1)

52-66: Signature change: avoid boolean trap — use options struct or document the param

Only call-site found: go/apps/ctrl/services/deployment/backend_adapter.go:64 (NewLocalBackendAdapter forwards isRunningDocker).

Either:

  • Add a short doc comment on isRunningDocker (minimal, non-breaking):
-// NewBackend creates a new deployment backend based on the specified type
-func NewBackend(backendType string, logger logging.Logger, isRunningDocker bool) (DeploymentBackend, error) {
+// NewBackend creates a new deployment backend.
+// isRunningDocker indicates whether ctrl itself is running inside a Docker network (affects host/port resolution).
+func NewBackend(backendType string, logger logging.Logger, isRunningDocker bool) (DeploymentBackend, error) {
  • Or adopt an options struct and update the single caller:
type BackendOpts struct {
  IsRunningDocker bool
}

func NewBackend(backendType string, logger logging.Logger, opts BackendOpts) (DeploymentBackend, error) {
  // ...
  case BackendTypeDocker:
    return NewDockerBackend(logger, opts.IsRunningDocker)
}

Update go/apps/ctrl/services/deployment/backend_adapter.go:64 to forward opts accordingly.

deployment/docker-compose.yaml (1)

151-158: Fix MinIO console port mapping (dev-only).

Console is configured for 3903 but mapped as 2903. Use 3903:3903 to match MINIO_CONSOLE_PORT_NUMBER. Prior learning also flagged this.

Apply this diff:

   ports:
     - 3902:3902
-    - 2903:2903
+    - 3903:3903
   environment:
     MINIO_ROOT_USER: minio_root_user
     MINIO_ROOT_PASSWORD: minio_root_password
     MINIO_API_PORT_NUMBER: 3902
     MINIO_CONSOLE_PORT_NUMBER: 3903
go/apps/ctrl/services/deployment/backends/docker.go (2)

85-111: Non-compiling loop over vmCount.

for i := range vmCount is invalid; vmCount is a number, not iterable.

Apply this diff:

-  // Create containers
-  for i := range vmCount {
+  // Create containers
+  for i := 0; i < int(vmCount); i++ {
     vmID := uid.New("vm")
     containerName := fmt.Sprintf("unkey-%s-%s", deploymentID, vmID)

154-193: Host/port resolution can be wrong when ctrl runs outside Docker.

You always prefer container IPs if present; on macOS/Windows that IP isn’t reachable from the host. Use container IPs only when isRunningDocker; otherwise prefer mapped host ports. Also match the exact "8080/tcp" port key instead of contains().

Apply this diff:

-    // Get host and port from container
-    var host string
-    port := int32(8080) // Container internal port
-
-    // Always use container IP when available (works from inside and outside Docker)
-    if inspect.NetworkSettings != nil {
-      // Try legacy IPAddress field first
-      if inspect.NetworkSettings.IPAddress != "" {
-        host = inspect.NetworkSettings.IPAddress
-        port = int32(8080)
-      } else if inspect.NetworkSettings.Networks != nil {
-        // Try to get IP from any network
-        for _, network := range inspect.NetworkSettings.Networks {
-          if network.IPAddress != "" {
-            host = network.IPAddress
-            port = int32(8080)
-            break
-          }
-        }
-      }
-    }
-
-    // If no container IP found, fall back to external access
-    if host == "" {
-      // Fallback: use external access method
-      if d.isRunningDocker {
-        host = "host.docker.internal"
-      } else {
-        host = "localhost"
-      }
-      // Get the mapped port for external access
-      if inspect.NetworkSettings != nil && inspect.NetworkSettings.Ports != nil {
-        for containerPort, bindings := range inspect.NetworkSettings.Ports {
-          if strings.Contains(string(containerPort), "8080") && len(bindings) > 0 {
-            if p, err := strconv.Atoi(bindings[0].HostPort); err == nil {
-              port = int32(p)
-            }
-          }
-        }
-      }
-    }
+    // Resolve host/port depending on where ctrl runs
+    var host string
+    port := int32(8080) // container internal port
+
+    // Helper: exact mapped port lookup
+    getMapped := func() int32 {
+      if inspect.NetworkSettings != nil && inspect.NetworkSettings.Ports != nil {
+        if bindings, ok := inspect.NetworkSettings.Ports[nat.Port("8080/tcp")]; ok && len(bindings) > 0 {
+          if p, err := strconv.Atoi(bindings[0].HostPort); err == nil {
+            return int32(p)
+          }
+        }
+      }
+      return 0
+    }
+
+    // Helper: find container IP (any network)
+    getContainerIP := func() string {
+      if ns := inspect.NetworkSettings; ns != nil {
+        if ns.IPAddress != "" {
+          return ns.IPAddress
+        }
+        for _, n := range ns.Networks {
+          if n.IPAddress != "" {
+            return n.IPAddress
+          }
+        }
+      }
+      return ""
+    }
+
+    if d.isRunningDocker {
+      // Inside Docker: prefer container IP (same bridge network)
+      if ip := getContainerIP(); ip != "" {
+        host = ip
+        port = int32(8080)
+      } else if mp := getMapped(); mp != 0 {
+        // Rare fallback
+        host = "host.docker.internal"
+        port = mp
+      } else {
+        host = "host.docker.internal"
+      }
+    } else {
+      // Outside Docker: prefer mapped port on localhost for cross‑platform reachability
+      host = "localhost"
+      if mp := getMapped(); mp != 0 {
+        port = mp
+      }
+    }
go/apps/ctrl/services/deployment/deploy_workflow.go (5)

169-177: Non-compiling loop: cannot range over an int.

for i := range 300 { does not compile in Go. Use a counted loop and respect context cancellation.

Apply:

-		for i := range 300 {
-			time.Sleep(time.Second)
+		for i := 0; i < 300; i++ {
+			select {
+			case <-stepCtx.Done():
+				return nil, stepCtx.Err()
+			case <-time.After(time.Second):
+			}
 			if i%10 == 0 { // Log every 10 seconds instead of every second

191-211: VM status persisted as “running” regardless of actual state.

Don’t write RUNNING when the VM isn’t. Map proto state to DB status.

Apply:

-					upsertParams := partitiondb.UpsertVMParams{
+					vmStatus := partitiondb.VmsStatusCreated
+					switch instance.State {
+					case metaldv1.VmState_VM_STATE_RUNNING:
+						vmStatus = partitiondb.VmsStatusRunning
+					case metaldv1.VmState_VM_STATE_SHUTDOWN:
+						vmStatus = partitiondb.VmsStatusShutdown
+					}
+					upsertParams := partitiondb.UpsertVMParams{
 						ID:            instance.Id,
 						DeploymentID:  req.DeploymentID,
 						Address:       sql.NullString{Valid: true, String: fmt.Sprintf("%s:%d", instance.Host, instance.Port)},
 						CpuMillicores: 1000,                         // TODO derive from spec
 						MemoryMb:      1024,                         // TODO derive from spec
-						Status:        partitiondb.VmsStatusRunning, // TODO
+						Status:        vmStatus,
 					}

291-309: Domain type set to Custom for auto-generated domains.

If your schema supports an “auto/generated” type, set it accordingly so UX/routing can distinguish.

Example (adjust enum names to your schema):

-				Type:         db.DomainsTypeCustom,
+				Type: func() db.DomainsType {
+					if strings.HasSuffix(domain, "."+w.defaultDomain) || domain == w.defaultDomain {
+						return db.DomainsTypeAuto // VERIFY actual enum
+					}
+					return db.DomainsTypeCustom
+				}(),

330-359: Skip-local check is good; normalize input first.

If req.Hostname ever includes scheme/port (e.g., http://localhost:3000), current check will misclassify as non-local and create gateway configs.

Apply normalization before the loop:

-		for _, domain := range allDomains {
+		for _, raw := range allDomains {
+			domain := normalizeHostname(raw)
 			if isLocalHostname(domain, w.defaultDomain) {

And add (anywhere in this file):

// normalizeHostname strips scheme and port.
func normalizeHostname(h string) string {
	h = strings.TrimSpace(strings.ToLower(h))
	if u, err := url.Parse(h); err == nil && u.Host != "" {
		h = u.Host
	}
	if host, _, err := net.SplitHostPort(h); err == nil {
		h = host
	}
	return strings.TrimSuffix(h, ".")
}

Remember to import net and net/url.


599-631: Local hostname detection misses loopback ranges and Docker hostnames; add robust checks.

Covers localhost/127.0.0.1 only; misses ::1, 127.0.0.0/8, host.docker.internal, gateway.docker.internal, kubernetes.docker.internal, and private IPs used in local setups.

Apply:

-func isLocalHostname(hostname, defaultDomain string) bool {
+func isLocalHostname(hostname, defaultDomain string) bool {
 	// Lowercase for case-insensitive comparison
 	hostname = strings.ToLower(hostname)
 	defaultDomain = strings.ToLower(defaultDomain)
 
 	// Exact matches for common local hosts - these should be skipped
-	if hostname == "localhost" || hostname == "127.0.0.1" {
+	if hostname == "localhost" || hostname == "127.0.0.1" || hostname == "::1" ||
+		hostname == "host.docker.internal" || hostname == "gateway.docker.internal" ||
+		hostname == "kubernetes.docker.internal" {
 		return true
 	}
 
 	// If hostname uses the default domain, it should NOT be skipped (return false)
 	// This allows gateway configs to be created for the default domain
 	if strings.HasSuffix(hostname, "."+defaultDomain) || hostname == defaultDomain {
 		return false
 	}
 
 	// Check for local-only TLD suffixes - these should be skipped
 	// Note: .dev is a real TLD owned by Google, so it's excluded
 	localSuffixes := []string{
 		".local",
 		".test",
 	}
 	for _, suffix := range localSuffixes {
 		if strings.HasSuffix(hostname, suffix) {
 			return true
 		}
 	}
+	// IP-based checks
+	if ip := net.ParseIP(hostname); ip != nil {
+		if ip.IsLoopback() || ip.IsPrivate() {
+			return true
+		}
+	}
 	return false
 }

Add imports: net.

go/apps/ctrl/services/deployment/backends/k8s.go (4)

86-90: Non-compiling slice/loop with uint32.

make([]string, vmCount) and for i := range vmCount don’t compile. Cast to int and use a counted loop.

Apply:

-	vmIDs := make([]string, vmCount)
-	for i := range vmCount {
+	vmIDs := make([]string, int(vmCount))
+	for i := 0; i < int(vmCount); i++ {
 		vmIDs[i] = uid.New("vm")
 	}

213-239: Service should be owned by created controller to avoid leaks.

If a TTL’ed Job is GC’d, the NodePort Service will remain orphaned. Set OwnerReferences to Job/Deployment.

Sketch:

-	_, err := k.clientset.BatchV1().Jobs(k.namespace).Create(ctx, job, metav1.CreateOptions{})
+	createdJob, err := k.clientset.BatchV1().Jobs(k.namespace).Create(ctx, job, metav1.CreateOptions{})
...
-	service := &corev1.Service{
+	ownerRefs := []metav1.OwnerReference{*metav1.NewControllerRef(createdJob, batchv1.SchemeGroupVersion.WithKind("Job"))}
+	service := &corev1.Service{
 		ObjectMeta: metav1.ObjectMeta{
 			Name:      serviceName,
 			Namespace: k.namespace,
+			OwnerReferences: ownerRefs,

Analogous for Deployment path. Also consider ServiceType ClusterIP unless NodePort is truly required.


391-438: Undefined min() causes compile error.

min isn’t defined.

Apply:

 	// Use the smaller limit to ensure both names are valid
-	maxCore := min(maxDeploymentCore, maxServiceCore)
+	maxCore := func(a, b int) int {
+		if a < b {
+			return a
+		}
+		return b
+	}(maxDeploymentCore, maxServiceCore)

Or add a small min(int,int) int helper.


395-403: Regex compilation per call.

Precompile at package scope to avoid repeated allocations.

-	invalidCharsRegex := regexp.MustCompile(`[^a-z0-9-]+`)
+	// package-level: var invalidCharsRegex = regexp.MustCompile(`[^a-z0-9-]+`)

Same for multiHyphenRegex.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 10f390e and 312a280.

📒 Files selected for processing (13)
  • .gitignore (1 hunks)
  • deployment/docker-compose.yaml (12 hunks)
  • deployment/setup-wildcard-dns.sh (1 hunks)
  • go/apps/ctrl/services/deployment/backend_adapter.go (2 hunks)
  • go/apps/ctrl/services/deployment/backends/docker.go (4 hunks)
  • go/apps/ctrl/services/deployment/backends/interface.go (2 hunks)
  • go/apps/ctrl/services/deployment/backends/k8s.go (8 hunks)
  • go/apps/ctrl/services/deployment/deploy_workflow.go (3 hunks)
  • go/apps/gw/config.go (1 hunks)
  • go/apps/gw/localcert.go (1 hunks)
  • go/apps/gw/run.go (1 hunks)
  • go/cmd/ctrl/main.go (1 hunks)
  • go/cmd/gw/main.go (2 hunks)
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-09-11T08:17:21.409Z
Learnt from: Flo4604
PR: unkeyed/unkey#3944
File: go/apps/ctrl/services/deployment/fallbacks/k8s.go:236-247
Timestamp: 2025-09-11T08:17:21.409Z
Learning: The K8s fallback backend in go/apps/ctrl/services/deployment/fallbacks/k8s.go is specifically designed for scenarios where the ctrl service runs inside a Kubernetes cluster. It should not be used when ctrl runs outside the cluster - in those cases, the Docker fallback should be used instead.

Applied to files:

  • go/apps/ctrl/services/deployment/backends/docker.go
  • go/apps/ctrl/services/deployment/backend_adapter.go
  • go/apps/ctrl/services/deployment/backends/interface.go
📚 Learning: 2025-08-08T14:55:11.981Z
Learnt from: imeyer
PR: unkeyed/unkey#3755
File: deployment/docker-compose.yaml:179-184
Timestamp: 2025-08-08T14:55:11.981Z
Learning: In deployment/docker-compose.yaml (development only), MinIO is configured with API on 3902 and console on 3903; ports should map 3902:3902 and 3903:3903 to match MINIO_API_PORT_NUMBER and MINIO_CONSOLE_PORT_NUMBER.

Applied to files:

  • deployment/docker-compose.yaml
🧬 Code graph analysis (9)
go/cmd/gw/main.go (1)
go/pkg/cli/flag.go (1)
  • Bool (411-447)
go/apps/ctrl/services/deployment/backends/docker.go (1)
go/pkg/otel/logging/interface.go (1)
  • Logger (11-116)
go/apps/gw/run.go (1)
go/apps/gw/localcert.go (1)
  • LocalCertConfig (23-29)
go/apps/ctrl/services/deployment/backend_adapter.go (1)
go/apps/ctrl/services/deployment/backends/interface.go (2)
  • NewBackend (52-69)
  • DeploymentBackend (19-31)
go/apps/gw/localcert.go (4)
go/pkg/otel/logging/interface.go (1)
  • Logger (11-116)
go/pkg/db/handle_err_no_rows.go (1)
  • IsNotFound (8-10)
go/pkg/partition/db/models_generated.go (1)
  • Certificate (104-112)
go/pkg/partition/db/certificate_insert.sql_generated.go (1)
  • InsertCertificateParams (23-30)
go/cmd/ctrl/main.go (1)
go/pkg/cli/flag.go (2)
  • Bool (411-447)
  • EnvVar (287-304)
go/apps/ctrl/services/deployment/backends/interface.go (2)
go/apps/ctrl/services/deployment/backend_adapter.go (1)
  • DeploymentBackend (16-20)
go/apps/ctrl/services/deployment/backends/docker.go (1)
  • NewDockerBackend (40-63)
go/apps/ctrl/services/deployment/deploy_workflow.go (1)
go/apps/ctrl/services/deployment/backend_adapter.go (1)
  • NewDeploymentBackend (120-132)
go/apps/ctrl/services/deployment/backends/k8s.go (1)
go/pkg/ptr/pointer.go (1)
  • P (49-51)
🔇 Additional comments (11)
deployment/docker-compose.yaml (4)

207-213: LGTM: gw now terminates TLS on 80/443 and mounts certs.

Matches the new local-cert flow.


250-260: Expose only what you need; confirm docker backend signal is actually consumed.

Setting UNKEY_METALD_BACKEND=docker and UNKEY_DOCKER_RUNNING=true is correct for local fallback, but the new flag isn’t plumbed through ctrl config (see comment on go/cmd/ctrl/main.go). Until wired, the Docker backend may report hosts assuming “outside Docker”.


44-54: NGINX LB exposure looks fine. Verify nginx.apiv2.conf exists and maps 2112 correctly.


81-92: Redis image tag sanity check.

redis:8.0 may not exist on all registries yet. If this fails to pull, use the latest 7.x stable you rely on.

go/cmd/gw/main.go (1)

90-93: LGTM: adds require-local-cert flag with ENV binding.

Wired through to gw.Config; aligns with run.go local-cert step. Consider documenting that vault must be configured when enabling this.

go/apps/ctrl/services/deployment/backend_adapter.go (2)

63-73: LGTM: plumbs isRunningDocker to local backends.

Adapter cleanly forwards the flag to backends.NewBackend.


119-132: LGTM: backend selection with docker/k8s fallback.

Make sure callers pass the same isRunningDocker through DeployWorkflow.

go/apps/ctrl/services/deployment/deploy_workflow.go (2)

44-44: Backend constructor updated correctly.

Signature matches NewDeploymentBackend(metalD, fallbackType, logger, isRunningDocker). No issues.


32-38: Plumb IsRunningDocker from config/CLI and document its default.
Declared at go/apps/ctrl/services/deployment/deploy_workflow.go:38 and passed into NewDeploymentBackend via NewDeployWorkflow at deploy_workflow.go:44. Only these occurrences found — confirm DeployWorkflowConfig.IsRunningDocker is populated where the config is constructed (CLI/env/config) and add documentation for its default behavior.

go/apps/ctrl/services/deployment/backends/k8s.go (2)

100-149: Job path: ActiveDeadlineSeconds ties to TTL; confirm intent.

ActiveDeadlineSeconds == ttlSeconds aborts long-running jobs early; if TTL is meant for post-finish cleanup only, reduce/decouple it.


316-336: ClusterIP with NodePort service: confirm caller expectations.

You return ClusterIP:8080 regardless of Service type. If external access is needed, either return NodePort and node address or switch Service to ClusterIP and ensure in-cluster consumers.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (4)
go/apps/ctrl/services/deployment/backends/k8s.go (1)

214-235: Service type vs returned address/port mismatch.

You create a NodePort service but return ClusterIP:8080. Either:

  • Use ClusterIP service if ctrl runs in‑cluster, or
  • Keep NodePort and return a reachable node IP + service.Spec.Ports[0].NodePort.

Pick one; current combo is misleading and likely breaks external reachability.

-        Spec: corev1.ServiceSpec{
-            Type: corev1.ServiceTypeNodePort,
+        Spec: corev1.ServiceSpec{
+            Type: corev1.ServiceTypeClusterIP,
@@
-    // Always use cluster IP and container port for backend communication
-    host := service.Spec.ClusterIP
-    port := int32(8080) // Always use container port for backend service calls
+    // Use ClusterIP:ServicePort when running in‑cluster
+    host := service.Spec.ClusterIP
+    port := int32(8080)

Or, if you need NodePort, I can provide a patch to detect a node IP and use service.Spec.Ports[0].NodePort.

Also applies to: 322-336

deployment/docker-compose.yaml (3)

151-158: Fix MinIO console port mapping (2903 → 3903).

Console is configured for 3903 but the port mapping exposes 2903, so the console won’t be reachable.

Apply this diff:

   ports:
     - 3902:3902
-    - 2903:2903
+    - 3903:3903
   environment:
     MINIO_ROOT_USER: minio_root_user
     MINIO_ROOT_PASSWORD: minio_root_password
     MINIO_API_PORT_NUMBER: 3902
     MINIO_CONSOLE_PORT_NUMBER: 3903

Note: Aligns with prior learning (imeyer, 2025‑08‑08) to use 3902/3903.


56-59: Compose ignores deploy.replicas; use scale for local runs.

docker compose doesn’t honor deploy.replicas outside Swarm. Use docker compose up --scale apiv2=3 or document an override.


353-358: Drop unused volumes.

clickhouse-keeper and metald-aio-data aren’t referenced by active services.

Apply:

 volumes:
   mysql:
   clickhouse:
-  clickhouse-keeper:
   s3:
-  metald-aio-data:
♻️ Duplicate comments (8)
go/apps/ctrl/services/deployment/backends/k8s.go (2)

69-75: TTL hardcoded; make configurable.

Expose via UNKEY_K8S_TTL_SECONDS (0 disables). Keeps behavior flexible across clusters.

-        ttlSeconds:  7200, // 2 hours default TTL for auto-cleanup
+        ttlSeconds:  getTTLFromEnvOrDefault(7200), // 0 disables

Additional code (same file):

// near imports: add "strconv"
func getTTLFromEnvOrDefault(def int32) int32 {
    if v := os.Getenv("UNKEY_K8S_TTL_SECONDS"); v != "" {
        if n, err := strconv.Atoi(v); err == nil && n >= 0 && n <= 86400 {
            return int32(n)
        }
    }
    return def
}

282-300: Job status mapping: treat Succeeded as completed, not RUNNING.

Avoid reporting completed Jobs as RUNNING.

-        if job.Status.Succeeded > 0 {
-            state = metaldv1.VmState_VM_STATE_RUNNING
-        } else if job.Status.Failed > 0 {
+        if job.Status.Active > 0 {
+            state = metaldv1.VmState_VM_STATE_RUNNING
+        } else if job.Status.Succeeded > 0 {
+            state = metaldv1.VmState_VM_STATE_SHUTDOWN // or introduce COMPLETED
+        } else if job.Status.Failed > 0 {
             state = metaldv1.VmState_VM_STATE_SHUTDOWN
-        } else if job.Status.Active > 0 {
-            state = metaldv1.VmState_VM_STATE_RUNNING
         } else {
             state = metaldv1.VmState_VM_STATE_CREATED
         }
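The ordering matters because `Active`, `Succeeded`, and `Failed` can be non-zero simultaneously during retries. A self-contained sketch of the suggested mapping, using simplified stand-in types rather than the real `batchv1`/`metaldv1` generated types:

```go
package main

import "fmt"

// JobStatus mirrors the three batchv1.JobStatus counters we care about.
type JobStatus struct {
	Active    int32
	Succeeded int32
	Failed    int32
}

// VMState is a stand-in for the metaldv1 enum.
type VMState string

const (
	StateCreated  VMState = "CREATED"
	StateRunning  VMState = "RUNNING"
	StateShutdown VMState = "SHUTDOWN"
)

// mapJobState checks Active first, so a completed Job is never reported
// as RUNNING even when Succeeded is also non-zero.
func mapJobState(s JobStatus) VMState {
	switch {
	case s.Active > 0:
		return StateRunning
	case s.Succeeded > 0:
		return StateShutdown // or a dedicated COMPLETED state
	case s.Failed > 0:
		return StateShutdown
	default:
		return StateCreated
	}
}

func main() {
	fmt.Println(mapJobState(JobStatus{Succeeded: 1})) // SHUTDOWN, not RUNNING
	fmt.Println(mapJobState(JobStatus{Active: 2}))    // RUNNING
	fmt.Println(mapJobState(JobStatus{}))             // CREATED
}
```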
deployment/setup-wildcard-dns.sh (5)

5-5: Enable strict shell mode and sane IFS.

Prevents silent failures and catches unset vars.

-set -e
+set -Eeuo pipefail
+IFS=$'\n\t'

75-86: macOS: Don’t clobber dnsmasq.conf; write a dedicated file under dnsmasq.d and include it.

Overwriting the main config risks breaking existing setups.

-    DNSMASQ_CONF="$(brew --prefix)/etc/dnsmasq.conf"
+    DNSMASQ_CONF="$(brew --prefix)/etc/dnsmasq.conf"
+    DNSMASQ_DIR="$(brew --prefix)/etc/dnsmasq.d"
@@
-    # Add our configuration
-    echo "address=/unkey.local/127.0.0.1" > "$DNSMASQ_CONF"
+    # Add our configuration without clobbering the main file
+    mkdir -p "$DNSMASQ_DIR"
+    echo "address=/unkey.local/127.0.0.1" > "$DNSMASQ_DIR/unkey.local.conf"
+    # Ensure the main config includes dnsmasq.d
+    if ! grep -qE '^\s*conf-dir=.*dnsmasq\.d' "$DNSMASQ_CONF"; then
+      echo "conf-dir=$(brew --prefix)/etc/dnsmasq.d,*.conf" >> "$DNSMASQ_CONF"
+    fi

91-94: macOS: Avoid sudo with brew services.

Use user-level services to prevent permission issues.

-        sudo brew services start dnsmasq
+        brew services start dnsmasq
@@
-    echo "  sudo brew services stop dnsmasq"
+    echo "  brew services stop dnsmasq"

Also applies to: 148-151


107-115: Linux: Pin dnsmasq to loopback.

Add listen-address to avoid binding surprises on multi-NIC hosts.

-    {
+    {
       echo "# Unkey local development DNS configuration"
       echo "# Resolve all *.unkey.local domains to localhost"
-      echo "address=/unkey.local/127.0.0.1"
+      echo "listen-address=127.0.0.1"
+      echo "address=/unkey.local/127.0.0.1"
     } | sudo tee "$DNSMASQ_CONF" > /dev/null

131-136: Linux: Automate systemd-resolved split-DNS drop-in (optional but recommended).

Without this, many distros won’t query dnsmasq for unkey.local.

-    if systemctl is-active --quiet systemd-resolved; then
-        echo ""
-        echo "systemd-resolved detected. You may need to configure it to use dnsmasq."
-        echo "Add 'DNS=127.0.0.1' to /etc/systemd/resolved.conf and restart systemd-resolved"
-    fi
+    if systemctl is-active --quiet systemd-resolved; then
+        echo ""
+        echo "Configuring systemd-resolved split DNS for ~unkey.local ..."
+        sudo mkdir -p /etc/systemd/resolved.conf.d
+        sudo tee /etc/systemd/resolved.conf.d/unkey-local.conf >/dev/null <<'EOF'
+[Resolve]
+DNS=127.0.0.1
+Domains=~unkey.local
+EOF
+        sudo systemctl restart systemd-resolved
+        echo "systemd-resolved configured for unkey.local"
+    fi
deployment/docker-compose.yaml (1)

220-221: Thanks for adding the scheme to UNKEY_CTRL_ADDR.

This resolves the connect client URL issue flagged earlier.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 312a280 and 6e6adf0.

📒 Files selected for processing (6)
  • deployment/docker-compose.yaml (12 hunks)
  • deployment/setup-wildcard-dns.sh (1 hunks)
  • go/apps/ctrl/config.go (1 hunks)
  • go/apps/ctrl/services/deployment/backends/k8s.go (8 hunks)
  • go/apps/ctrl/services/deployment/deploy_workflow.go (3 hunks)
  • go/cmd/ctrl/main.go (2 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-08-08T14:55:11.981Z
Learnt from: imeyer
PR: unkeyed/unkey#3755
File: deployment/docker-compose.yaml:179-184
Timestamp: 2025-08-08T14:55:11.981Z
Learning: In deployment/docker-compose.yaml (development only), MinIO is configured with API on 3902 and console on 3903; ports should map 3902:3902 and 3903:3903 to match MINIO_API_PORT_NUMBER and MINIO_CONSOLE_PORT_NUMBER.

Applied to files:

  • deployment/docker-compose.yaml
🧬 Code graph analysis (3)
go/apps/ctrl/services/deployment/backends/k8s.go (1)
go/pkg/ptr/pointer.go (1)
  • P (49-51)
go/cmd/ctrl/main.go (1)
go/pkg/cli/flag.go (2)
  • Bool (411-447)
  • EnvVar (287-304)
go/apps/ctrl/services/deployment/deploy_workflow.go (1)
go/apps/ctrl/services/deployment/backend_adapter.go (1)
  • NewDeploymentBackend (120-132)
🪛 Gitleaks (8.27.2)
deployment/docker-compose.yaml

[high] 79-79: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 232-232: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)


[high] 265-265: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Test API / API Test Local
  • GitHub Check: Test Go API Local / Test
  • GitHub Check: Build / Build
🔇 Additional comments (12)
go/apps/ctrl/config.go (1)

89-92: LGTM: Config gains IsRunningDocker.

Clear docstring; no validation coupling needed here.

go/cmd/ctrl/main.go (3)

79-81: Add docker-running flag: LGTM.

Flag wiring and env var look correct.


147-151: Propagating docker-running into config: LGTM.

Correctly passed to ctrl.Config.


79-80: Align CLI default-domain with local DNS/TLS (use unkey.local)

CLI currently defaults to "unkey.app" while local TLS/scripts use "unkey.local" — pick one to avoid inconsistent auto-generated hostnames and gateway-skip logic; prefer switching the CLI default to "unkey.local" and preserve the UNKEY_DEFAULT_DOMAIN env override.

File: go/cmd/ctrl/main.go (lines 79–80)

Couldn’t verify repo-wide references (ripgrep returned "No files were searched"). Re-run verification:
rg -nI --hidden -C2 -e 'unkey.local' -e 'UNKEY_DEFAULT_DOMAIN' -e 'default-domain'
and confirm TLS/gateway handling is consistent.

go/apps/ctrl/services/deployment/deploy_workflow.go (2)

31-39: Extend DeployWorkflowConfig with IsRunningDocker: LGTM.

Field is passed downstream; no immediate use here — that’s fine.


44-45: Backend constructor now receives isRunningDocker: LGTM.

Matches adapter signature; good propagation.

deployment/docker-compose.yaml (6)

247-248: Docker socket mount is high‑privilege.

Acceptable for local dev, but treat the host as fully exposed to the container. Gate usage behind UNKEY_DOCKER_RUNNING and avoid using this in CI.


256-260: metald-aio is commented out; METALD address may break ctrl startup.

Confirm ctrl doesn’t attempt to reach METALD when backend=docker; otherwise remove/guard these vars or re‑enable metald‑aio.


211-213: Certs mount and RequireLocalCert=true: ensure automation and Git hygiene.

  • Verify local cert auto‑generation runs on first boot; otherwise gw will crash.
  • Confirm certs/ is gitignored (private keys must not be committed).

Also applies to: 217-221


276-285: No action required — scrape target matches depends_on.

deployment/config/prometheus.yml contains url: http://apiv2:2112/sd, so depends_on: apiv2 is correct; only change if you intend to scrape apiv2_lb instead.

Likely an incorrect or invalid review comment.


47-49: deployment/nginx.apiv2.conf exists — no action required.
deployment/nginx.apiv2.conf is present, so the docker-compose volume mount for apiv2_lb is valid.


83-83: Verified — redis:8.0 exists on Docker Hub. Tag is published (examples: redis:8.0, redis:8.0.3). No CI pull-failure expected; optional: pin to a specific patch (e.g., redis:8.0.3) or upgrade to the current stable series (redis:8.2).

UNKEY_OTEL: true
OTEL_EXPORTER_OTLP_ENDPOINT: "http://otel:4318"
OTEL_EXPORTER_OTLP_PROTOCOL: "http/protobuf"
UNKEY_OTEL: false
🧹 Nitpick (assertive)

Quote boolean env to avoid YAML pitfalls.

Make intent explicit.

Apply:

-      UNKEY_OTEL: false
+      UNKEY_OTEL: "false"
🧰 Tools
🪛 Checkov (3.2.334)

[low] 73-74: Base64 High Entropy String

(CKV_SECRET_6)

🤖 Prompt for AI Agents
In deployment/docker-compose.yaml around line 74, the environment variable
UNKEY_OTEL is set to an unquoted boolean (false) which can be misinterpreted by
YAML parsers; update the value to a quoted string (e.g., "false" or 'false') to
make the intent explicit and avoid YAML boolean parsing pitfalls.

Comment on lines 75 to 79
VAULT_S3_URL: "http://s3:3902"
VAULT_S3_BUCKET: "vault"
VAULT_S3_ACCESS_KEY_ID: "minio_root_user"
VAULT_S3_ACCESS_KEY_SECRET: "minio_root_password"
VAULT_MASTER_KEYS: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="
⚠️ Potential issue

Unify S3 bucket name or pre‑create both to avoid 404s.

apiv2/agent use “vault” while gw/ctrl use “acme‑vault”. Either standardize or auto‑create both buckets at MinIO startup.

Apply this diff to standardize on “vault”:

-      VAULT_S3_BUCKET: "vault"
+      VAULT_S3_BUCKET: "vault"
@@
-      VAULT_S3_BUCKET: "vault"
+      VAULT_S3_BUCKET: "vault"
@@
-      UNKEY_VAULT_S3_BUCKET: "acme-vault"
+      UNKEY_VAULT_S3_BUCKET: "vault"
@@
-      UNKEY_VAULT_S3_BUCKET: "acme-vault"
+      UNKEY_VAULT_S3_BUCKET: "vault"

Alternatively, keep both names and pre‑create them at MinIO:

# in s3.environment (add)
MINIO_DEFAULT_BUCKETS: "vault,acme-vault"

Also applies to: 108-113, 228-233, 261-266

🧰 Tools
🪛 Gitleaks (8.27.2)

[high] 79-79: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🤖 Prompt for AI Agents
In deployment/docker-compose.yaml around lines 75 to 79 (and also at 108-113,
228-233, 261-266) the S3 bucket names differ between services (some use "vault",
others "acme-vault"), causing 404s; either standardize all VAULT_S3_BUCKET env
values to the same name (e.g., "vault") across those locations, or keep both
names and add a MinIO startup env var to pre-create both buckets (set
MINIO_DEFAULT_BUCKETS to "vault,acme-vault" in the s3 service environment
block); update every occurrence listed so all services reference existing
buckets.

VAULT_S3_BUCKET: "vault"
VAULT_S3_ACCESS_KEY_ID: "minio_root_user"
VAULT_S3_ACCESS_KEY_SECRET: "minio_root_password"
VAULT_MASTER_KEYS: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="
🛠️ Refactor suggestion

⚠️ Potential issue

Hardcoded master keys/tokens in VCS (even for dev) — move to env files.

Store secrets in a local .env (gitignored) and reference via env_file or placeholders; avoid committing real values.

Minimal change:

-      VAULT_MASTER_KEYS: "Ch9rZ..."
+      VAULT_MASTER_KEYS: "${VAULT_MASTER_KEYS:?set in .env}"
@@
-      UNKEY_VAULT_MASTER_KEYS: "Ch9rZ..."
+      UNKEY_VAULT_MASTER_KEYS: "${UNKEY_VAULT_MASTER_KEYS:?set in .env}"

Add an example .env.example with dummy values and keep .env in .gitignore.

Also applies to: 232-232, 265-265

🧰 Tools
🪛 Gitleaks (8.27.2)

[high] 79-79: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.

(generic-api-key)

🤖 Prompt for AI Agents
In deployment/docker-compose.yaml around line 79 (and also at lines 232 and
265), a real VAULT_MASTER_KEYS value is hardcoded into version control; remove
the secret value and replace it with an environment variable reference (e.g. use
${VAULT_MASTER_KEYS}) or load via an env_file. Add a gitignored .env file at the
repo root containing the actual secret for local use and commit a .env.example
with dummy placeholder values to show required keys. Update .gitignore to
include .env so secrets are not committed.

Comment on lines +207 to 209
- "80:80"
- "443:443"
depends_on:
🧹 Nitpick (assertive)

Host ports 80/443 can collide with local services.

Consider mapping to 8080/8443 for local dev or make host ports configurable via .env.

Example:

-      - "80:80"
-      - "443:443"
+      - "${GW_HTTP_PORT:-8080}:80"
+      - "${GW_HTTPS_PORT:-8443}:443"

Also applies to: 214-221

🤖 Prompt for AI Agents
In deployment/docker-compose.yaml around lines 207-209 (and similarly lines
214-221) the compose file maps container ports to host ports 80 and 443 which
can collide with local services; update the port mappings to use non-privileged
defaults (for example map host 8080->container 80 and 8443->container 443) and
make them configurable via environment variables by replacing the literals with
variables (e.g. ${HTTP_PORT:-8080}:80 and ${HTTPS_PORT:-8443}:443), add those
variables to .env with the default values, and update any README/docs to note
the new configurable ports.

Comment on lines +223 to +224
UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true&interpolateParams=true"
UNKEY_KEYS_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
⚠️ Potential issue

DSNs reference databases that may not exist (partition_001, hydra).

Unless migrations create them, MySQL will error on connect. Either create DBs on init or point to “unkey”.

Apply this diff (Option A: single DB for local):

-      UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true&interpolateParams=true"
-      UNKEY_KEYS_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
+      UNKEY_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
+      UNKEY_KEYS_DATABASE_PRIMARY: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
@@
-      UNKEY_DATABASE_HYDRA: "unkey:password@tcp(mysql:3306)/hydra?parseTime=true&interpolateParams=true"
-      UNKEY_DATABASE_PARTITION: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true&interpolateParams=true"
+      UNKEY_DATABASE_HYDRA: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
+      UNKEY_DATABASE_PARTITION: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"

Option B (keep DSNs): add an init script to create DBs:

-- deployment/mysql-init.sql
CREATE DATABASE IF NOT EXISTS hydra;
CREATE DATABASE IF NOT EXISTS partition_001;
GRANT ALL ON hydra.* TO 'unkey'@'%';
GRANT ALL ON partition_001.* TO 'unkey'@'%';

Then mount it under mysql:

  • ./deployment/mysql-init.sql:/docker-entrypoint-initdb.d/01-init.sql:ro

Also applies to: 250-252

🤖 Prompt for AI Agents
In deployment/docker-compose.yaml around lines 223-224 (and also check lines
250-252), the configured DSNs reference databases (partition_001, hydra) that
may not exist and will cause MySQL connection errors; either change the DSN
values to point to a single existing database (e.g., use "unkey" for both
UNKEY_DATABASE_PRIMARY and UNKEY_KEYS_DATABASE_PRIMARY) or add a MySQL init SQL
script that creates the required databases and grants permissions (CREATE
DATABASE IF NOT EXISTS hydra; CREATE DATABASE IF NOT EXISTS partition_001; GRANT
ALL ... to 'unkey'@'%') and mount it into the container under
/docker-entrypoint-initdb.d/01-init.sql:ro in the docker-compose service
definition so the DBs exist before the application connects.

Comment on lines +30 to +40
if [[ "$OS" == "macos" ]]; then
    echo "Would you like to install dnsmasq using Homebrew? (y/n)"
else
    echo "Would you like to install dnsmasq using your package manager? (y/n)"
fi
read -r response
if [[ "$response" != "y" ]]; then
    echo "Installation cancelled"
    exit 1
fi

🧹 Nitpick (assertive)

Make prompts scriptable and idempotent.

Accept y/yes/Y and support non-interactive mode via UNKEY_NONINTERACTIVE=1 to unblock CI/devcontainers.

-    read -r response
-    if [[ "$response" != "y" ]]; then
+    if [[ "${UNKEY_NONINTERACTIVE:-0}" == "1" ]]; then
+        response="y"
+    else
+        read -r response
+    fi
+    shopt -s nocasematch
+    if [[ ! "$response" =~ ^y(es)?$ ]]; then
         echo "Installation cancelled"
         exit 1
     fi
+    shopt -u nocasematch
@@
-    read -r response
+    if [[ "${UNKEY_NONINTERACTIVE:-0}" == "1" ]]; then response="y"; else read -r response; fi
@@
-    read -r response
+    if [[ "${UNKEY_NONINTERACTIVE:-0}" == "1" ]]; then response="y"; else read -r response; fi

Also applies to: 88-95, 118-129

🤖 Prompt for AI Agents
deployment/setup-wildcard-dns.sh around lines 30-40 (also apply the same changes
at 88-95 and 118-129): currently prompts are interactive and only accept a
single lowercase "y"; modify each prompt block to be scriptable and idempotent
by 1) treating UNKEY_NONINTERACTIVE=1 as non-interactive auto-accept (skip read
and proceed), 2) accepting "y", "Y", "yes", and "YES" as affirmative responses
when prompting, and 3) before prompting, check idempotency conditions (e.g.,
detect if dnsmasq is already installed or the configuration already applied) and
skip installation/configuration steps if already present. Ensure the read uses
read -r response and normalize response to lowercase for comparison when not in
non-interactive mode.

Comment on lines +599 to +616
// isLocalHostname checks if a hostname should be skipped from gateway config creation
// Returns true for localhost/development domains that shouldn't get gateway configs
func isLocalHostname(hostname, defaultDomain string) bool {
    // Lowercase for case-insensitive comparison
    hostname = strings.ToLower(hostname)
    defaultDomain = strings.ToLower(defaultDomain)

    // Exact matches for common local hosts - these should be skipped
    if hostname == "localhost" || hostname == "127.0.0.1" {
        return true
    }

    // If hostname uses the default domain, it should NOT be skipped (return false)
    // This allows gateway configs to be created for the default domain
    if strings.HasSuffix(hostname, "."+defaultDomain) || hostname == defaultDomain {
        return false
    }


🧹 Nitpick (assertive)

Broaden local-host detection; keep default-domain exception.

Consider also ::1, 0.0.0.0, host.docker.internal, gateway.docker.internal, and .localhost; avoids creating gateway configs for obviously local-only hosts.

 func isLocalHostname(hostname, defaultDomain string) bool {
   // Lowercase for case-insensitive comparison
   hostname = strings.ToLower(hostname)
   defaultDomain = strings.ToLower(defaultDomain)

-  // Exact matches for common local hosts - these should be skipped
-  if hostname == "localhost" || hostname == "127.0.0.1" {
+  // Exact matches for common local hosts - these should be skipped
+  switch hostname {
+  case "localhost", "127.0.0.1", "::1", "0.0.0.0", "host.docker.internal", "gateway.docker.internal":
     return true
-  }
+  }
@@
-  localSuffixes := []string{
-    ".local",
-    ".test",
-  }
+  localSuffixes := []string{".local", ".test", ".localhost"}

Also applies to: 617-631

🤖 Prompt for AI Agents
In go/apps/ctrl/services/deployment/deploy_workflow.go around lines 599-616 (and
also apply the same change to 617-631), broaden the local-host detection while
preserving the existing default-domain exception: add checks so the function
returns true for "::1", "0.0.0.0", "host.docker.internal",
"gateway.docker.internal" and for any hostname that ends with ".localhost"
(case-insensitive), but still return false when the hostname equals or has a
suffix of the configured defaultDomain; keep the existing lowercase
normalization and the existing localhost/127.0.0.1 checks and apply identical
logic to the other mentioned lines.
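The broadened check can be prototyped standalone. This is a sketch of the review's suggestion (the extra Docker-internal hosts and the `.localhost` suffix are proposals, not the code as merged), preserving the default-domain exception:

```go
package main

import (
	"fmt"
	"strings"
)

// isLocalHostname reports whether a hostname is local-only and should be
// skipped for gateway config creation. Hostnames on the default domain
// are never skipped, even when the default domain ends in ".local".
func isLocalHostname(hostname, defaultDomain string) bool {
	hostname = strings.ToLower(hostname)
	defaultDomain = strings.ToLower(defaultDomain)

	switch hostname {
	case "localhost", "127.0.0.1", "::1", "0.0.0.0",
		"host.docker.internal", "gateway.docker.internal":
		return true
	}

	// Default-domain exception: these hosts must get gateway configs.
	if hostname == defaultDomain || strings.HasSuffix(hostname, "."+defaultDomain) {
		return false
	}

	for _, suffix := range []string{".local", ".test", ".localhost"} {
		if strings.HasSuffix(hostname, suffix) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(isLocalHostname("api.unkey.local", "unkey.local"))      // false: default domain wins
	fmt.Println(isLocalHostname("foo.localhost", "unkey.local"))        // true
	fmt.Println(isLocalHostname("host.docker.internal", "unkey.local")) // true
}
```

Note the ordering: the default-domain check runs before the suffix check, so a `*.unkey.local` default domain is not swallowed by the generic `.local` rule.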

@chronark merged commit c319425 into main on Sep 15, 2025
22 of 23 checks passed
@chronark deleted the feat/run-deploy-locally branch on September 15, 2025 at 12:02
@coderabbitai bot mentioned this pull request on Oct 21, 2025