Vercel: 1 Skipped Deployment
📝 Walkthrough

Adds Kubernetes development tooling and manifests: namespace, RBAC, infra (MySQL, ClickHouse, Redis, S3, PlanetScale), service deployments (api, gw, ctrl, dashboard), a Tiltfile for live-reload builds, Makefile targets to orchestrate k8s workflows, Dockerfile variants (including a Tilt runtime image), and K8s development docs.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Dev as Developer
    participant Make as Makefile
    participant Tilt as Tilt
    participant Img as Image builder
    participant K8s as Kubernetes API
    Dev->>Make: make k8s-up / make start-<service>
    Make->>Img: build images (optional)
    Make->>K8s: create namespace / apply manifests
    K8s-->>Make: pods start (initContainers may block)
    Note right of K8s: initContainers wait for mysql/clickhouse/s3 etc.
    K8s-->>Make: readiness probes pass
    Make-->>Dev: environment ready
    Dev->>Tilt: tilt up [--services=...]
    Tilt->>Img: docker_build_with_restart (live_update)
    Tilt->>K8s: apply manifests / port-forwards
    K8s-->>Tilt: pod image/binary updates on sync
    Tilt-->>Dev: unified logs and endpoints
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers

Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)

✨ Finishing touches
🧪 Generate unit tests
📜 Recent review details
Configuration used: CodeRabbit UI · Review profile: CHILL · Plan: Pro
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
Tip
👮 Agentic pre-merge checks are now available in preview! Pro plan users can now enable pre-merge checks in their settings to enforce checklists before merging PRs. Please see the documentation for more information. Example:

```yaml
reviews:
  pre_merge_checks:
    custom_checks:
      - name: "Undocumented Breaking Changes"
        mode: "warning"
        instructions: |
          Pass/fail criteria: All breaking changes to public APIs, CLI flags, environment variables, configuration keys, database schemas, or HTTP/GraphQL endpoints must be documented in the "Breaking Change" section of the PR description and in CHANGELOG.md. Exclude purely internal or private changes (e.g., code not exported from package entry points or explicitly marked as internal).
```

Please share your feedback with us on this Discord post. Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Force-pushed from 1905111 to 1bdae1e (Compare)
chronark
left a comment
the gateway has issues starting, I see this in the logs:

```
unable to create listener: listen tcp :8080: bind: address already in use
```
other than that, tilt looks really nice
Mind pulling the latest changes and trying again? If it still says this, mind checking what's running on your 8080 port?
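For anyone hitting the same bind error, a quick sketch of how to see what is on 8080 (this assumes `lsof` is installed; the second probe only needs bash's `/dev/tcp`):

```shell
# Show the process listening on TCP 8080 (macOS/Linux)
lsof -nP -iTCP:8080 -sTCP:LISTEN

# Without lsof: probe the port with bash's /dev/tcp redirection
(echo > /dev/tcp/127.0.0.1/8080) 2>/dev/null && echo "8080 in use" || echo "8080 free"
```

If something is listening, kill it or pick another host port before `tilt up`.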
Thank you for following the naming conventions for pull request titles! 🙏
Actionable comments posted: 13
🧹 Nitpick comments (47)
go/Dockerfile.tilt (1)
1-9: Pin base image and drop root

For safer, reproducible dev images, pin BusyBox and run as non‑root.

Apply:

```diff
-FROM busybox:latest
+FROM busybox:1.36.1

 # Copy the pre-built binary
 COPY bin/unkey /unkey

 LABEL org.opencontainers.image.source=https://github.com/unkeyed/unkey/go
 LABEL org.opencontainers.image.description="Unkey API"

+USER 65532:65532
+STOPSIGNAL SIGTERM
 ENTRYPOINT ["/unkey"]
```

go/k8s/manifests/redis.yaml (2)
18-37: Add pod/container securityContext and resource requests

Harden defaults and avoid noisy cluster scheduling.

Apply:

```diff
     spec:
       containers:
         - name: redis
           image: redis:8.0
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            seccompProfile:
+              type: RuntimeDefault
+            capabilities:
+              drop: ["ALL"]
+          resources:
+            requests:
+              cpu: 100m
+              memory: 128Mi
+            limits:
+              cpu: 500m
+              memory: 512Mi
```

54-54: Prefer ClusterIP for local dev unless you have MetalLB

LoadBalancer may stay Pending on kind/minikube; ClusterIP + tilt port‑forward is simpler.

```diff
-  type: LoadBalancer
+  type: ClusterIP
```

go/Dockerfile (2)
1-1: Use a non‑root distroless base (and consider digest pinning)

Switch the default to the nonroot variant (or pin a digest) to avoid root at runtime.

```diff
-ARG BASE_IMAGE=gcr.io/distroless/static-debian12:latest
+ARG BASE_IMAGE=gcr.io/distroless/static-debian12:nonroot
```

14-21: Run as non‑root explicitly; minor cleanup

Make the intent explicit and keep the entrypoint formatting consistent.

```diff
 FROM ${BASE_IMAGE}
 COPY --from=builder /go/src/github.com/unkeyed/unkey/go/bin/unkey /
 LABEL org.opencontainers.image.source=https://github.com/unkeyed/unkey/go
 LABEL org.opencontainers.image.description="Unkey API"
-ENTRYPOINT ["/unkey"]
+USER 65532:65532
+ENTRYPOINT ["/unkey"]
```

Also confirm the builder tag is valid for your CI runtime:
- Current: FROM golang:1.24 AS builder
- If 1.24 isn't published yet on your runners, drop to 1.22/1.23.
go/k8s/manifests/api.yaml (2)
20-23: imagePullPolicy: Never will break outside local

Gate it behind a dev overlay or switch to IfNotPresent for broader compatibility.

```diff
-          imagePullPolicy: Never # Use local images
+          imagePullPolicy: IfNotPresent
```

95-131: Service duplication and LoadBalancer usage

Two LB Services for the same pods will consume extra IPs. For local clusters, prefer one ClusterIP and rely on tilt port‑forward; optionally add a single NodePort/LB for 80.
Would you like a kustomize dev overlay to flip types?
go/k8s/manifests/gw.yaml (2)
90-99: Dependency wait: consider the same /dev/tcp pattern

Unify with the api init wait to avoid nc dependency variance.

112-120: Port 8080 conflict reported: add an easy override path

The bind conflict reported in the comments suggests another local process on 8080. Provide a Tilt config knob to override UNKEY_HTTP_PORT (and the Service port) for gw during local runs.
I can patch Tiltfile/manifests to accept a gwPort param and plumb it through.
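As a starting point, a minimal sketch of such a knob in the Tiltfile (Starlark); the resource name `gw` and the setting name `gw-port` are assumptions, not the repo's actual identifiers:

```python
# Tiltfile fragment (Starlark, not plain Python) — overridable gateway host port
config.define_string("gw-port", args=False, usage="Host port for the gateway (default 8080)")
cfg = config.parse()
gw_port = cfg.get("gw-port", "8080")

# Forward the chosen host port to the container's 8080
k8s_resource("gw", port_forwards=["%s:8080" % gw_port])
```

Usage would look like `tilt up -- --gw-port=9090` when 8080 is taken.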
go/k8s/manifests/dashboard.yaml (2)
31-35: Add securityContext and resources

Harden the pod and give the scheduler hints.

```diff
         - name: dashboard
           image: unkey/dashboard:latest
           imagePullPolicy: Never
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            seccompProfile: { type: RuntimeDefault }
+            capabilities: { drop: ["ALL"] }
+          resources:
+            requests: { cpu: 100m, memory: 128Mi }
+            limits: { cpu: 500m, memory: 512Mi }
           ports:
```

80-80: Prefer ClusterIP for local dev

LB may require MetalLB. Port‑forward from Tilt is usually simpler.

```diff
-  type: LoadBalancer
+  type: ClusterIP
```

go/k8s/manifests/s3.yaml (3)
32-66: Harden the container and add resources

Apply a baseline securityContext and requests/limits.

```diff
       containers:
         - name: minio
           image: bitnami/minio:2025.7.23
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            seccompProfile: { type: RuntimeDefault }
+            capabilities: { drop: ["ALL"] }
+          resources:
+            requests: { cpu: 200m, memory: 256Mi }
+            limits: { cpu: 1, memory: 1Gi }
```

83-88: Service types for local clusters

Unless you run MetalLB, switch both Services to ClusterIP and rely on port‑forwarding.

```diff
-  type: LoadBalancer
+  type: ClusterIP
```

Also applies to: 101-107

5-12: PVC: consider explicit storageClass and a smaller default

50Gi may be heavy for laptops. Consider 5–10Gi and optionally set storageClassName for kind/minikube.
go/k8s/manifests/rbac.yaml (1)
3-9: Consider disabling auto token mount if not required.

Set automountServiceAccountToken: false on the ServiceAccount and explicitly mount tokens only when needed.

```diff
 kind: ServiceAccount
 metadata:
   name: ctrl-service-account
   namespace: unkey
   labels:
     app: ctrl
+automountServiceAccountToken: false
```

go/k8s/manifests/ctrl.yaml (5)
21-26: Harden container: run as non‑root, drop caps, no privilege escalation, seccomp.

```diff
         - name: ctrl
           image: unkey-ctrl:latest
           imagePullPolicy: Never # Use local images
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities:
+              drop: ["ALL"]
+            seccompProfile:
+              type: RuntimeDefault
           ports:
             - containerPort: 7091
```

75-83: Init check with BusyBox nc may fail (-z not always supported). Use a /dev/tcp fallback.

```diff
-              "until nc -z mysql 3306 && nc -z s3 3902; do echo waiting for dependencies; sleep 2; done;",
+              "until (echo >/dev/tcp/mysql/3306) 2>/dev/null && (echo >/dev/tcp/s3/3902) 2>/dev/null; do echo waiting for dependencies; sleep 2; done;",
```

85-101: Service type LoadBalancer is unnecessary with Tilt port‑forwards; prefer ClusterIP for local.

```diff
   type: LoadBalancer
+  # For local dev, ClusterIP + Tilt port_forwards is simpler and avoids Node LB quirks.
+  # type: ClusterIP
```

If you want to switch now:

```diff
-  type: LoadBalancer
+  type: ClusterIP
```
33-41: UNKEY_IMAGE says unkey:latest but the container image is unkey-ctrl:latest. Align or drop.

Is UNKEY_IMAGE consumed by telemetry/versioning? If not, remove it or set it to unkey-ctrl:latest for clarity.

18-26: Add readiness/liveness probes for ctrl.

Without probes, restarts and rollouts are blind.

```diff
         - name: ctrl
@@
           ports:
             - containerPort: 7091
+          readinessProbe:
+            httpGet:
+              path: /healthz
+              port: 7091
+            initialDelaySeconds: 5
+            periodSeconds: 5
+          livenessProbe:
+            httpGet:
+              path: /healthz
+              port: 7091
+            initialDelaySeconds: 20
+            periodSeconds: 10
```

Adjust the path to your actual health endpoint.
go/k8s/manifests/observability.yaml (3)
36-51: Harden Prometheus container and add basic resources.

```diff
       containers:
         - name: prometheus
           image: prom/prometheus:v2.45.0
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities: { drop: ["ALL"] }
+            seccompProfile: { type: RuntimeDefault }
+          resources:
+            requests: { cpu: "50m", memory: "128Mi" }
+            limits: { cpu: "200m", memory: "512Mi" }
```

91-107: Harden OTEL Collector and cap resources.

```diff
       containers:
         - name: otel-collector
           image: otel/opentelemetry-collector-contrib:0.92.0
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities: { drop: ["ALL"] }
+            seccompProfile: { type: RuntimeDefault }
+          resources:
+            requests: { cpu: "50m", memory: "160Mi" }
+            limits: { cpu: "200m", memory: "256Mi" }
```

65-71: NodePort exposure is fine for OrbStack; otherwise prefer ClusterIP + Tilt port‑forwards.

If you're using Tilt forwards already, reduce the surface:

```diff
-  type: NodePort
+  type: ClusterIP
```

Also applies to: 167-183
go/k8s/manifests/clickhouse.yaml (4)
50-79: Probes use clickhouse-client; prefer HTTP /ping to avoid the CLI dependency.

```diff
-          readinessProbe:
-            exec:
-              command: [ "clickhouse-client", "--host", "localhost", "--user", "default", "--password", "password", "--query", "SELECT 1" ]
+          readinessProbe:
+            httpGet:
+              path: /ping
+              port: 8123
             initialDelaySeconds: 30
             periodSeconds: 10
             timeoutSeconds: 5
-          livenessProbe:
-            exec:
-              command: [ "clickhouse-client", "--host", "localhost", "--user", "default", "--password", "password", "--query", "SELECT 1" ]
+          livenessProbe:
+            httpGet:
+              path: /ping
+              port: 8123
             initialDelaySeconds: 60
             periodSeconds: 30
             timeoutSeconds: 5
```

32-40: Harden container and add basic resources.

```diff
         - name: clickhouse
           image: unkey/clickhouse:latest
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities: { drop: ["ALL"] }
+            seccompProfile: { type: RuntimeDefault }
+          resources:
+            requests: { cpu: "100m", memory: "256Mi" }
+            limits: { cpu: "500m", memory: "1Gi" }
```

1-13: PVC size 100Gi is heavy for local; default to 10Gi unless you need more.

```diff
-      storage: 100Gi
+      storage: 10Gi
```

97-105: LoadBalancer unnecessary locally; prefer ClusterIP + Tilt port‑forward.

```diff
-  type: LoadBalancer
+  type: ClusterIP
```

go/k8s/manifests/mysql.yaml (3)
32-46: Harden container and set resource requests/limits.

```diff
       containers:
         - name: mysql
           image: unkey/mysql:latest
           imagePullPolicy: Never
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities: { drop: ["ALL"] }
+            seccompProfile: { type: RuntimeDefault }
+          resources:
+            requests: { cpu: "100m", memory: "256Mi" }
+            limits: { cpu: "500m", memory: "1Gi" }
```

53-61: Probes: add timeout/failure thresholds for faster feedback.

```diff
           readinessProbe:
             exec:
               command: [ "mysqladmin", "ping", "-h", "localhost" ]
             initialDelaySeconds: 30
             periodSeconds: 10
+            timeoutSeconds: 3
+            failureThreshold: 6
           livenessProbe:
             exec:
               command: [ "mysqladmin", "ping", "-h", "localhost" ]
             initialDelaySeconds: 60
             periodSeconds: 30
+            timeoutSeconds: 3
+            failureThreshold: 5
```

Also applies to: 62-69

86-90: Prefer ClusterIP for local; Tilt already port‑forwards 3306.

```diff
-  type: LoadBalancer
+  type: ClusterIP
```

go/Tiltfile (4)
9-11: Help text omits services you support; include api, gateway/gw, dashboard, redis.

```diff
-config.define_string_list("services", args=False, usage="Services to run (mysql,ctrl,clickhouse,s3,observability,planetscale or all)")
+config.define_string_list("services", args=False, usage="Services to run (mysql,ctrl,api,gateway,gw,dashboard,clickhouse,s3,observability,planetscale,redis or all)")
```

291-299: Printed Gateway URL is wrong (6060). Your port_forwards are 8080/8443.

```diff
-Gateway: http://localhost:6060
+Gateway: http://localhost:8080 (HTTP)
+Gateway TLS: https://localhost:8443
```

271-312: Message mentions ingress but none is defined. Remove or clarify.

```diff
-- Use ingress for service access
```

Optionally add "Use Tilt port-forwards above to access services."

201-206: 8080 conflicts are common; make gw ports configurable via config options.

```diff
+# At top near other config:
+# config.define_int("gw_http_port", args=False, usage="Gateway HTTP port (default 8080)")
+# config.define_int("gw_https_port", args=False, usage="Gateway HTTPS port (default 8443)")
 #
 # Then use:
-        port_forwards=['8080:8080', '8443:8443'],
+        port_forwards=[f'{cfg.get("gw_http_port", 8080)}:8080', f'{cfg.get("gw_https_port", 8443)}:8443'],
```
149-153: Ctrl port docs (8084) disagree with manifests (7091). Align.

```diff
-- `UNKEY_HTTP_PORT`: Service port (8084)
+- `UNKEY_HTTP_PORT`: Service port (7091)
```

Also update any examples referencing 8084.

233-242: Port conflict section references 8084; update to the actual ports used (gateway 8080, ctrl 7091).

```diff
-If ports 8084 or 3306 are in use:
+If ports 8080 (Gateway), 7091 (Ctrl), or 3306 (MySQL) are in use:
@@
-lsof -i :8084
+lsof -i :8080
+lsof -i :7091
 lsof -i :3306
```

9-11: Fix markdownlint MD034 by wrapping bare URLs.

```diff
-- **kubectl**: https://kubernetes.io/docs/tasks/tools/
+- **kubectl**: <https://kubernetes.io/docs/tasks/tools/>
@@
-- **Tilt**: https://docs.tilt.dev/install.html
+- **Tilt**: <https://docs.tilt.dev/install.html>
@@
 - Hot reloading for Go code changes
-- Unified log viewing in web UI (http://localhost:10350)
+- Unified log viewing in web UI (<http://localhost:10350>)
```

Repeat similarly for other localhost links.
Also applies to: 19-21, 59-61, 76-81
66-86: Access guidance mixes NodePort and LB; specify which path matches your manifests, or prefer Tilt port‑forwards.

Clarify: "Use Tilt port‑forwards by default; use NodePort on OrbStack only."
go/k8s/manifests/planetscale.yaml (2)
20-41: Harden container and avoid the inline DB password.

```diff
         - name: planetscale-http
           image: ghcr.io/mattrobenolt/ps-http-sim:v0.0.12
+          securityContext:
+            runAsNonRoot: true
+            allowPrivilegeEscalation: false
+            readOnlyRootFilesystem: true
+            capabilities: { drop: ["ALL"] }
+            seccompProfile: { type: RuntimeDefault }
+          resources:
+            requests: { cpu: "25m", memory: "64Mi" }
+            limits: { cpu: "200m", memory: "256Mi" }
@@
-            - name: DATABASE_PASSWORD
-              value: "password"
+            - name: DATABASE_PASSWORD
+              valueFrom: { secretKeyRef: { name: mysql-secrets, key: app_password } }
```

Reuses mysql-secrets from the MySQL comment.
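For reference, a minimal sketch of the mysql-secrets Secret this assumes (the key name app_password and the dev value are placeholders, not something defined in this PR):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secrets
  namespace: unkey
type: Opaque
stringData:
  app_password: password # dev-only placeholder; replace outside local clusters
```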
61-68: Service can be ClusterIP locally; Tilt or the Dashboard can access it in‑cluster.

```diff
-  type: LoadBalancer
+  type: ClusterIP
```

go/Makefile (7)
1-1: Fix .PHONY list and add missing “all” target.
- .PHONY references k8s-status (no rule) and omits several defined rules (k8s-stop, start-clickhouse, start-redis, start-s3, start-planetscale, start-observability, start-api, start-gw, start-dashboard, start-unkey-services, start-dependencies, start-all).
- Static check flags missing required phony “all”.
Apply:
```diff
-.PHONY: install tools fmt test-unit test-integration test-integration-long test-stress test build generate pull up clean k8s-check k8s-up k8s-down k8s-reset k8s-status start-mysql start-ctrl start-all dev
+.PHONY: all install tools fmt test-unit test-integration test-integration-long test-stress test build generate pull up clean \
+	k8s-check k8s-up k8s-down k8s-stop k8s-reset k8s-status k8s-ports \
+	start-mysql start-clickhouse start-redis start-s3 start-planetscale start-observability \
+	start-api start-gw start-ctrl start-dashboard start-unkey-services start-dependencies start-all dev
```

Also add the `all` target:

```makefile
all: build
```
76-80: Improve preflight: show current context and clearer guidance.Echo current context to help users debug cluster selection (Docker Desktop vs kind/minikube/OrbStack).
```diff
 k8s-check: ## Check if kubectl is available and cluster is running
 	@command -v kubectl >/dev/null 2>&1 || { echo "ERROR: kubectl not found. Install from: https://kubernetes.io/docs/tasks/tools/"; exit 1; }
 	@kubectl cluster-info >/dev/null 2>&1 || { echo "ERROR: Kubernetes cluster not available. Enable Kubernetes in Docker Desktop/OrbStack"; exit 1; }
-	@echo "Kubernetes cluster is available"
+	@echo "Kubernetes cluster is available (context: $$(kubectl config current-context))"
```
110-116: Delete cluster-scoped resources applied from manifests.If RBAC includes ClusterRole/ClusterRoleBinding, they’ll linger after namespace deletion.
```diff
 k8s-down: ## Delete all services from current Kubernetes cluster
 	@echo "Deleting all services..."
+	@kubectl delete -f k8s/manifests/ --ignore-not-found=true || true
 	@kubectl delete namespace unkey --ignore-not-found=true
 	@echo "Services deleted"
```
117-126: Parametrize the wait timeout and consider an optional second label.

Expose the timeout via a variable (done above). If any manifest doesn't label pods as app=$(1), allow an override label key via a 6th arg later if needed.
Please confirm all manifests use app=$(fileBasename) for pod labels.
131-134: Unify the ClickHouse build through the macro for consistency.

Minor tidy-up.

```diff
-start-clickhouse: k8s-check ## Deploy only ClickHouse
-	@docker build -t unkey/clickhouse:latest -f ../deployment/Dockerfile.clickhouse ../
-	$(call deploy-service,clickhouse,ClickHouse)
+start-clickhouse: k8s-check ## Deploy only ClickHouse
+	$(call deploy-service,clickhouse,ClickHouse,unkey/clickhouse:latest,-f ../deployment/Dockerfile.clickhouse ../)
```
156-158: Build the dashboard image during k8s-up or document the expectation.

k8s-up doesn't build unkey/dashboard:latest; if the manifests use a local image, add a build step there or clarify that it uses a remote image.
165-173: Shrink the dev recipe to satisfy checkmake's maxbodylength.

Collapse it to one shell line.

```diff
 dev: ## Start with Tilt (if available) or fallback to k8s-up
-	@if command -v tilt >/dev/null 2>&1; then \
-		echo "Starting with Tilt..."; \
-		tilt up; \
-	else \
-		echo "Tilt not found, using k8s-up instead"; \
-		echo "Install Tilt from: https://docs.tilt.dev/install.html"; \
-		make k8s-up; \
-	fi
+	@if command -v tilt >/dev/null 2>&1; then echo "Starting with Tilt..."; tilt up; else echo "Tilt not found, using k8s-up instead"; echo "Install Tilt from: https://docs.tilt.dev/install.html"; $(MAKE) k8s-up; fi
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (17)
- go/Dockerfile (2 hunks)
- go/Dockerfile.tilt (1 hunks)
- go/K8S_DEVELOPMENT.md (1 hunks)
- go/Makefile (2 hunks)
- go/Tiltfile (1 hunks)
- go/k8s/manifests/api.yaml (1 hunks)
- go/k8s/manifests/clickhouse.yaml (1 hunks)
- go/k8s/manifests/ctrl.yaml (1 hunks)
- go/k8s/manifests/dashboard.yaml (1 hunks)
- go/k8s/manifests/gw.yaml (1 hunks)
- go/k8s/manifests/mysql.yaml (1 hunks)
- go/k8s/manifests/namespace.yaml (1 hunks)
- go/k8s/manifests/observability.yaml (1 hunks)
- go/k8s/manifests/planetscale.yaml (1 hunks)
- go/k8s/manifests/rbac.yaml (1 hunks)
- go/k8s/manifests/redis.yaml (1 hunks)
- go/k8s/manifests/s3.yaml (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-09-11T08:17:21.409Z
Learnt from: Flo4604
PR: unkeyed/unkey#3944
File: go/apps/ctrl/services/deployment/fallbacks/k8s.go:236-247
Timestamp: 2025-09-11T08:17:21.409Z
Learning: The K8s fallback backend in go/apps/ctrl/services/deployment/fallbacks/k8s.go is specifically designed for scenarios where the ctrl service runs inside a Kubernetes cluster. It should not be used when ctrl runs outside the cluster - in those cases, the Docker fallback should be used instead.
Applied to files:
go/k8s/manifests/ctrl.yaml
📚 Learning: 2025-08-08T15:09:01.312Z
Learnt from: Flo4604
PR: unkeyed/unkey#3753
File: go/apps/api/openapi/config.yaml:9-10
Timestamp: 2025-08-08T15:09:01.312Z
Learning: Repo unkeyed/unkey: In go/apps/api/openapi, oapi-codegen doesn’t support OpenAPI 3.1 union nullability; overlay.yaml must be applied before codegen. The overlay key in oapi-codegen config isn’t supported—use a pre-step (programmatic or CLI) to merge overlay into the bundled spec, then run oapi-codegen.
Applied to files:
go/k8s/manifests/api.yaml
📚 Learning: 2025-09-01T15:10:44.959Z
Learnt from: imeyer
PR: unkeyed/unkey#3899
File: go/proto/metald/v1/deployment.proto:7-15
Timestamp: 2025-09-01T15:10:44.959Z
Learning: In the unkey/unkey repository, the metald service receives deployment_id values from upstream services rather than generating them internally. The deployment_id field in DeploymentRequest is part of a service-to-service communication pattern.
Applied to files:
go/k8s/manifests/api.yaml
🪛 markdownlint-cli2 (0.17.2)
go/K8S_DEVELOPMENT.md
10-10: Bare URL used
(MD034, no-bare-urls)
20-20: Bare URL used
(MD034, no-bare-urls)
59-59: Bare URL used
(MD034, no-bare-urls)
76-76: Bare URL used
(MD034, no-bare-urls)
77-77: Bare URL used
(MD034, no-bare-urls)
78-78: Bare URL used
(MD034, no-bare-urls)
79-79: Bare URL used
(MD034, no-bare-urls)
80-80: Bare URL used
(MD034, no-bare-urls)
81-81: Bare URL used
(MD034, no-bare-urls)
🪛 Checkov (3.2.334)
go/k8s/manifests/s3.yaml
[medium] 15-71: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 15-71: Minimize the admission of root containers
(CKV_K8S_23)
go/k8s/manifests/mysql.yaml
[medium] 15-75: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 15-75: Minimize the admission of root containers
(CKV_K8S_23)
go/k8s/manifests/dashboard.yaml
[medium] 2-64: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 2-64: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 42-43: Basic Auth Credentials
(CKV_SECRET_4)
go/k8s/manifests/gw.yaml
[medium] 2-100: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 2-100: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 57-58: Basic Auth Credentials
(CKV_SECRET_4)
go/k8s/manifests/ctrl.yaml
[medium] 2-85: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 2-85: Minimize the admission of root containers
(CKV_K8S_23)
go/k8s/manifests/observability.yaml
[medium] 19-56: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 19-56: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 74-112: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 74-112: Minimize the admission of root containers
(CKV_K8S_23)
go/k8s/manifests/redis.yaml
[medium] 2-39: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 2-39: Minimize the admission of root containers
(CKV_K8S_23)
go/k8s/manifests/planetscale.yaml
[medium] 2-52: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 2-52: Minimize the admission of root containers
(CKV_K8S_23)
go/k8s/manifests/clickhouse.yaml
[medium] 15-85: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 15-85: Minimize the admission of root containers
(CKV_K8S_23)
go/k8s/manifests/api.yaml
[medium] 2-95: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[medium] 2-95: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 49-50: Basic Auth Credentials
(CKV_SECRET_4)
🪛 checkmake (0.2.2)
go/Makefile
[warning] 1-1: Missing required phony target "all"
(minphony)
[warning] 165-165: Target body for "dev" exceeds allowed length of 5 (8).
(maxbodylength)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
- GitHub Check: Test Go API Local / Test
- GitHub Check: Test API / API Test Local
- GitHub Check: Build / Build
🔇 Additional comments (12)
go/k8s/manifests/namespace.yaml (1)
1-7: Namespace looks good

Consistent labeling and a dedicated namespace are helpful for scoping dev resources.
go/k8s/manifests/api.yaml (2)
72-85: Probes look fine

Endpoints/intervals are reasonable for dev.

85-94: BusyBox nc availability: verify or use a safer wait

Some BusyBox builds lack nc. Consider a pure-sh loop using /dev/tcp.

```diff
-          command:
-            [
-              "sh",
-              "-c",
-              "until nc -z mysql 3306 && nc -z clickhouse 8123; do echo waiting for dependencies; sleep 2; done;",
-            ]
+          command:
+            - sh
+            - -c
+            - |
+              while :; do
+                (echo > /dev/tcp/mysql/3306) 2>/dev/null && (echo > /dev/tcp/clickhouse/8123) 2>/dev/null && break
+                echo "waiting for dependencies"; sleep 2
+              done
```

go/k8s/manifests/gw.yaml (2)
27-33: Disable TLS via env is fine for dev; keep it configurable

No action—just confirm Tilt exposes a flag to flip UNKEY_TLS_ENABLED in test runs.

77-89: Probes look good

Targets and periods are reasonable for dev.
go/k8s/manifests/ctrl.yaml (1)
52-57: metald placeholder likely unused with the k8s backend; consider removing it to avoid confusion.

If backend=k8s ignores UNKEY_METALD_ADDRESS, drop these vars in dev.
go/Makefile (6)
72-75: Section header LGTM.
128-130: MySQL target LGTM.

Build args and manifest apply via the macro look good.
135-137: Redis target LGTM.
138-140: S3 target LGTM.
159-164: Batch targets LGTM.
141-146: Labels confirmed — wait selectors match manifests.
go/k8s/manifests/planetscale.yaml contains "app: planetscale"; go/k8s/manifests/observability.yaml contains "app: otel-collector".
```yaml
          env:
            # Server Configuration
            - name: UNKEY_HTTP_PORT
              value: "7070"
            - name: UNKEY_LOGS_COLOR
              value: "true"
            - name: UNKEY_TEST_MODE
              value: "false"
            # Instance Identification
            - name: UNKEY_PLATFORM
              value: "kubernetes"
            - name: UNKEY_IMAGE
              value: "unkey:latest"
            - name: UNKEY_REGION
              value: "local"
            - name: UNKEY_INSTANCE_ID
              value: "api-dev"
            # Database Configuration
            - name: UNKEY_DATABASE_PRIMARY
              value: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
            # Caching and Storage
            - name: UNKEY_REDIS_URL
              value: "redis://redis:6379"
            - name: UNKEY_CLICKHOUSE_URL
              value: "clickhouse://default:password@clickhouse:9000?secure=false&skip_verify=true"
            # Observability - DISABLED for development
            - name: UNKEY_OTEL
              value: "false"
            - name: UNKEY_PROMETHEUS_PORT
              value: "0"
            # Vault Configuration
            - name: UNKEY_VAULT_MASTER_KEYS
              value: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="
            - name: UNKEY_VAULT_S3_URL
              value: "http://s3:3902"
            - name: UNKEY_VAULT_S3_BUCKET
              value: "vault"
            - name: UNKEY_VAULT_S3_ACCESS_KEY_ID
              value: "minio_root_user"
            - name: UNKEY_VAULT_S3_ACCESS_KEY_SECRET
              value: "minio_root_password"
            # ClickHouse Proxy Service Configuration
            - name: UNKEY_CHPROXY_AUTH_TOKEN
              value: "chproxy-test-token-123"
            # Request Body Configuration
            - name: UNKEY_MAX_REQUEST_BODY_SIZE
              value: "10485760"
```
🛠️ Refactor suggestion
Credentials and keys in manifest: move to Secrets
MySQL DSN, ClickHouse password, Vault master keys, S3 creds, and chproxy token are in plain text. Use k8s Secrets and secretKeyRef/envFrom.
Example (pattern):
```diff
-          - name: UNKEY_DATABASE_PRIMARY
-            value: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
+          - name: UNKEY_DATABASE_PRIMARY
+            valueFrom:
+              secretKeyRef:
+                name: api-secrets
+                key: database_primary_dsn
 ...
-          - name: UNKEY_VAULT_MASTER_KEYS
-            value: "..."
+          - name: UNKEY_VAULT_MASTER_KEYS
+            valueFrom:
+              secretKeyRef:
+                name: api-secrets
+                key: vault_master_keys
```

I can provide a Secret manifest template if helpful.
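A minimal sketch of that api-secrets Secret (the key names match the secretKeyRefs suggested above; the values shown are the current dev defaults and are placeholders, not production material):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: api-secrets
  namespace: unkey
type: Opaque
stringData:
  database_primary_dsn: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
  vault_master_keys: "<vault master key material>" # dev-only placeholder
```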
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```yaml
          env:
            # Server Configuration
            - name: UNKEY_HTTP_PORT
              value: "7070"
            - name: UNKEY_LOGS_COLOR
              value: "true"
            - name: UNKEY_TEST_MODE
              value: "false"
            # Instance Identification
            - name: UNKEY_PLATFORM
              value: "kubernetes"
            - name: UNKEY_IMAGE
              value: "unkey:latest"
            - name: UNKEY_REGION
              value: "local"
            - name: UNKEY_INSTANCE_ID
              value: "api-dev"
            # Database Configuration
            - name: UNKEY_DATABASE_PRIMARY
              valueFrom:
                secretKeyRef:
                  name: api-secrets
                  key: database_primary_dsn
            # Caching and Storage
            - name: UNKEY_REDIS_URL
              value: "redis://redis:6379"
            - name: UNKEY_CLICKHOUSE_URL
              value: "clickhouse://default:password@clickhouse:9000?secure=false&skip_verify=true"
            # Observability - DISABLED for development
            - name: UNKEY_OTEL
              value: "false"
            - name: UNKEY_PROMETHEUS_PORT
              value: "0"
            # Vault Configuration
            - name: UNKEY_VAULT_MASTER_KEYS
              valueFrom:
                secretKeyRef:
                  name: api-secrets
                  key: vault_master_keys
            - name: UNKEY_VAULT_S3_URL
              value: "http://s3:3902"
            - name: UNKEY_VAULT_S3_BUCKET
              value: "vault"
            - name: UNKEY_VAULT_S3_ACCESS_KEY_ID
              value: "minio_root_user"
            - name: UNKEY_VAULT_S3_ACCESS_KEY_SECRET
              value: "minio_root_password"
            # ClickHouse Proxy Service Configuration
            - name: UNKEY_CHPROXY_AUTH_TOKEN
              value: "chproxy-test-token-123"
            # Request Body Configuration
            - name: UNKEY_MAX_REQUEST_BODY_SIZE
              value: "10485760"
```
🧰 Tools
🪛 Checkov (3.2.334)
[medium] 49-50: Basic Auth Credentials
(CKV_SECRET_4)
```yaml
- name: CLICKHOUSE_ADMIN_USER
  value: "default"
- name: CLICKHOUSE_ADMIN_PASSWORD
  value: "password"
- name: ALLOW_EMPTY_PASSWORD
  value: "no"
volumeMounts:
```
ClickHouse admin credentials are hardcoded. Move to Secret.
```diff
-          - name: CLICKHOUSE_ADMIN_USER
-            value: "default"
-          - name: CLICKHOUSE_ADMIN_PASSWORD
-            value: "password"
+          - name: CLICKHOUSE_ADMIN_USER
+            valueFrom: { secretKeyRef: { name: clickhouse-secrets, key: admin_user } }
+          - name: CLICKHOUSE_ADMIN_PASSWORD
+            valueFrom: { secretKeyRef: { name: clickhouse-secrets, key: admin_password } }
```

Add Secret (outside this file):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: clickhouse-secrets
  namespace: unkey
type: Opaque
stringData:
  admin_user: default
  admin_password: password
```

🤖 Prompt for AI Agents
In go/k8s/manifests/clickhouse.yaml around lines 41-47 the ClickHouse admin
credentials are hardcoded; remove the CLICKHOUSE_ADMIN_USER and
CLICKHOUSE_ADMIN_PASSWORD env entries and instead reference a Kubernetes Secret.
Create the suggested Secret named clickhouse-secrets in the unkey namespace (as
provided outside this file) and update the Deployment/Pod spec to use envFrom or
env with valueFrom: secretKeyRef for admin_user and admin_password (and keep
ALLOW_EMPTY_PASSWORD as is), ensuring names match the Secret keys and namespace
is set if needed.
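As background on the `stringData` vs `data` distinction used in the Secret above: `stringData` accepts plaintext and the API server stores it base64-encoded under `data`. A minimal, runnable sketch of that encoding (the literal `password` value mirrors the dev credential above; no cluster needed):

```shell
# `stringData` takes plaintext; it is persisted base64-encoded under `data`.
plain='password'
encoded=$(printf '%s' "$plain" | base64)
echo "data value: $encoded"        # prints: data value: cGFzc3dvcmQ=

# Decoding round-trips to the original plaintext.
decoded=$(printf '%s' "$encoded" | base64 -d)
echo "round-trip: $decoded"        # prints: round-trip: password
```

This is why `stringData` is the friendlier choice for bootstrap manifests: no manual base64 step, and `kubectl get secret ... -o yaml` still shows the encoded form.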
```yaml
- name: UNKEY_AUTH_TOKEN
  value: "dev-auth-token"
- name: UNKEY_METALD_ADDRESS
  value: "http://metald-placeholder:8080"
- name: UNKEY_SPIFFE_SOCKET_PATH
  value: "/var/lib/spire/agent/agent.sock"
# Vault Configuration (required)
- name: UNKEY_VAULT_MASTER_KEYS
  value: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="
- name: UNKEY_VAULT_S3_URL
  value: "http://s3:3902"
- name: UNKEY_VAULT_S3_BUCKET
  value: "vault"
- name: UNKEY_VAULT_S3_ACCESS_KEY_ID
  value: "minio_root_user"
- name: UNKEY_VAULT_S3_ACCESS_KEY_SECRET
  value: "minio_root_password"
```
🛠️ Refactor suggestion
Plaintext secrets (auth token, vault keys, S3 creds) committed in manifest. Move to Kubernetes Secrets.
```diff
-            - name: UNKEY_AUTH_TOKEN
-              value: "dev-auth-token"
+            - name: UNKEY_AUTH_TOKEN
+              valueFrom:
+                secretKeyRef:
+                  name: ctrl-secrets
+                  key: unkey_auth_token
@@
-            - name: UNKEY_VAULT_MASTER_KEYS
-              value: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="
+            - name: UNKEY_VAULT_MASTER_KEYS
+              valueFrom:
+                secretKeyRef:
+                  name: ctrl-secrets
+                  key: vault_master_keys
@@
-            - name: UNKEY_VAULT_S3_ACCESS_KEY_ID
-              value: "minio_root_user"
+            - name: UNKEY_VAULT_S3_ACCESS_KEY_ID
+              valueFrom:
+                secretKeyRef:
+                  name: ctrl-secrets
+                  key: s3_access_key_id
-            - name: UNKEY_VAULT_S3_ACCESS_KEY_SECRET
-              value: "minio_root_password"
+            - name: UNKEY_VAULT_S3_ACCESS_KEY_SECRET
+              valueFrom:
+                secretKeyRef:
+                  name: ctrl-secrets
+                  key: s3_access_key_secret
```

Secret to add (outside this file):
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ctrl-secrets
  namespace: unkey
type: Opaque
stringData:
  unkey_auth_token: dev-auth-token
  vault_master_keys: "<base64-or-json-keys>"
  s3_access_key_id: minio_root_user
  s3_access_key_secret: minio_root_password
```

```yaml
env:
  # Database configuration
  - name: DATABASE_HOST
    value: "planetscale:3900"
  # ClickHouse configuration
  - name: CLICKHOUSE_URL
    value: "http://unkey:password@clickhouse:8123"
  # Environment
  - name: NODE_ENV
    value: "production"
  # Instance identification
  - name: UNKEY_PLATFORM
    value: "kubernetes"
  - name: UNKEY_REGION
    value: "local"
```
🛠️ Refactor suggestion
CLICKHOUSE_URL includes credentials: use Secrets
Avoid embedding passwords in git. Move to Secret and reference via secretKeyRef.
```diff
-          - name: CLICKHOUSE_URL
-            value: "http://unkey:password@clickhouse:8123"
+          - name: CLICKHOUSE_URL
+            valueFrom:
+              secretKeyRef:
+                name: dashboard-secrets
+                key: clickhouse_url
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

```yaml
env:
  # Database configuration
  - name: DATABASE_HOST
    value: "planetscale:3900"
  # ClickHouse configuration
  - name: CLICKHOUSE_URL
    value: "http://unkey:password@clickhouse:8123"
  # Environment
  - name: NODE_ENV
    value: "production"
  # Instance identification
  - name: UNKEY_PLATFORM
    value: "kubernetes"
  - name: UNKEY_REGION
    value: "local"
```

Suggested:

```yaml
env:
  # Database configuration
  - name: DATABASE_HOST
    value: "planetscale:3900"
  # ClickHouse configuration
  - name: CLICKHOUSE_URL
    valueFrom:
      secretKeyRef:
        name: dashboard-secrets
        key: clickhouse_url
  # Environment
  - name: NODE_ENV
    value: "production"
  # Instance identification
  - name: UNKEY_PLATFORM
    value: "kubernetes"
  - name: UNKEY_REGION
    value: "local"
```
🧰 Tools
🪛 Checkov (3.2.334)
[medium] 42-43: Basic Auth Credentials
(CKV_SECRET_4)
🤖 Prompt for AI Agents
In go/k8s/manifests/dashboard.yaml around lines 36 to 50, the CLICKHOUSE_URL
currently embeds credentials in plaintext; instead create a Kubernetes Secret
holding the ClickHouse credentials (either a full CLICKHOUSE_URL value or
separate CLICKHOUSE_USER and CLICKHOUSE_PASSWORD keys) and update the pod env to
load the value via valueFrom.secretKeyRef. If you choose separate keys, expose
them as env vars and (if the container supports it) let the app assemble the URL
from those secrets, otherwise store the full URL in the Secret and reference it
directly with secretKeyRef for CLICKHOUSE_URL.
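If the split-key route is taken, something (the app or an entrypoint wrapper) has to assemble the URL from the two secret-backed variables. A minimal sketch of that assembly — the `CLICKHOUSE_USER`/`CLICKHOUSE_PASSWORD` names are hypothetical, and the values are hardcoded here only so the sketch runs outside a cluster:

```shell
# In the pod these would come from `valueFrom.secretKeyRef` entries;
# hardcoded here purely for illustration (dev values from the manifest).
CLICKHOUSE_USER="unkey"
CLICKHOUSE_PASSWORD="password"

# Assemble the connection URL the dashboard manifest currently hardcodes.
CLICKHOUSE_URL="http://${CLICKHOUSE_USER}:${CLICKHOUSE_PASSWORD}@clickhouse:8123"
echo "$CLICKHOUSE_URL"   # prints: http://unkey:password@clickhouse:8123
```

Storing the full URL in the Secret avoids this assembly step entirely, at the cost of duplicating host/port into the Secret.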
```yaml
env:
  # Server Configuration
  - name: UNKEY_HTTP_PORT
    value: "8080"
  - name: UNKEY_HTTPS_PORT
    value: "8443"
  - name: UNKEY_TLS_ENABLED
    value: "false"
  # Instance Identification
  - name: UNKEY_PLATFORM
    value: "kubernetes"
  - name: UNKEY_IMAGE
    value: "unkey:latest"
  - name: UNKEY_REGION
    value: "local"
  - name: UNKEY_GATEWAY_ID
    value: "gw-dev"
  - name: UNKEY_DEFAULT_CERT_DOMAIN
    value: ""
  - name: UNKEY_MAIN_DOMAIN
    value: ""
  - name: UNKEY_CTRL_ADDR
    value: "ctrl:7091"
  # Database Configuration - Partitioned (for gateway operations)
  - name: UNKEY_DATABASE_PRIMARY
    value: "unkey:password@tcp(mysql:3306)/partition_001?parseTime=true&interpolateParams=true"
  # Database Configuration - Keys Service
  - name: UNKEY_KEYS_DATABASE_PRIMARY
    value: "unkey:password@tcp(mysql:3306)/unkey?parseTime=true&interpolateParams=true"
  # ClickHouse Configuration
  - name: UNKEY_CLICKHOUSE_URL
    value: "clickhouse://default:password@clickhouse:9000?secure=false&skip_verify=true"
  # Redis Configuration
  - name: UNKEY_REDIS_URL
    value: "redis://redis:6379"
  # Observability - DISABLED for development
  - name: UNKEY_OTEL
    value: "false"
  - name: UNKEY_PROMETHEUS_PORT
    value: "0"
  # Vault Configuration
  - name: UNKEY_VAULT_MASTER_KEYS
    value: "Ch9rZWtfMmdqMFBJdVhac1NSa0ZhNE5mOWlLSnBHenFPENTt7an5MRogENt9Si6wms4pQ2XIvqNSIgNpaBenJmXgcInhu6Nfv2U="
  - name: UNKEY_VAULT_S3_URL
    value: "http://s3:3902"
  - name: UNKEY_VAULT_S3_BUCKET
    value: "vault"
  - name: UNKEY_VAULT_S3_ACCESS_KEY_ID
    value: "minio_root_user"
  - name: UNKEY_VAULT_S3_ACCESS_KEY_SECRET
    value: "minio_root_password"
```
🛠️ Refactor suggestion
Secrets in env: migrate to k8s Secrets
DSNs, ClickHouse password, Redis URL, Vault keys, S3 creds are embedded. Move to Secret(s) and reference via secretKeyRef/envFrom.
I can draft a gw‑secrets manifest and update env blocks accordingly.
🧰 Tools
🪛 Checkov (3.2.334)
[medium] 57-58: Basic Auth Credentials
(CKV_SECRET_4)
🤖 Prompt for AI Agents
In go/k8s/manifests/gw.yaml around lines 26–76 the env block exposes sensitive
values (DB DSNs, ClickHouse password, Redis URL, Vault master keys and S3 creds)
in plain text; create a new Kubernetes Secret (e.g., gw-secrets) containing
these values (base64-encoded) and replace the literal env values with secret
references using either envFrom: secretRef: name: gw-secrets or individual env
entries using valueFrom: secretKeyRef: name: gw-secrets key:<key-name>; move the
following keys into the Secret: UNKEY_DATABASE_PRIMARY,
UNKEY_KEYS_DATABASE_PRIMARY, UNKEY_CLICKHOUSE_URL, UNKEY_REDIS_URL,
UNKEY_VAULT_MASTER_KEYS, UNKEY_VAULT_S3_ACCESS_KEY_ID,
UNKEY_VAULT_S3_ACCESS_KEY_SECRET (and any other sensitive UNKEY_VAULT_S3_* and
UNKEY_VAULT_S3_BUCKET if sensitive), then remove the plain-text values from
gw.yaml and reference the Secret instead; apply and verify pods mount the env
from the Secret and that RBAC/namespace access restricts Secret retrieval.
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: ctrl-deployment-manager-binding
  labels:
    app: ctrl
subjects:
  - kind: ServiceAccount
    name: ctrl-service-account
    namespace: unkey
roleRef:
  kind: ClusterRole
  name: ctrl-deployment-manager
  apiGroup: rbac.authorization.k8s.io
```
Use RoleBinding, not ClusterRoleBinding, and bind to the namespaced Role.
```diff
-apiVersion: rbac.authorization.k8s.io/v1
-kind: ClusterRoleBinding
+apiVersion: rbac.authorization.k8s.io/v1
+kind: RoleBinding
 metadata:
   name: ctrl-deployment-manager-binding
   labels:
     app: ctrl
 subjects:
   - kind: ServiceAccount
     name: ctrl-service-account
     namespace: unkey
 roleRef:
-  kind: ClusterRole
+  kind: Role
   name: ctrl-deployment-manager
   apiGroup: rbac.authorization.k8s.io
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In go/k8s/manifests/rbac.yaml around lines 32 to 45, the manifest incorrectly
uses a ClusterRoleBinding to bind a namespaced service account to what should be
a namespaced Role; replace the ClusterRoleBinding with a RoleBinding, add the
metadata.namespace field (namespace: unkey) under metadata, ensure the subject
includes namespace: unkey, and change roleRef.kind from ClusterRole to Role
(keeping roleRef.name: ctrl-deployment-manager and apiGroup:
rbac.authorization.k8s.io) so the binding targets the namespaced Role instead of
a cluster-scoped ClusterRole.
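Pulling the pieces of that prompt together, a sketch of the fully namespaced RoleBinding (including the `metadata.namespace: unkey` the prompt asks for, which the diff above does not add) written to a scratch file and sanity-checked without a cluster:

```shell
# Write the namespaced RoleBinding described in the review note.
out=$(mktemp)
cat > "$out" <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: ctrl-deployment-manager-binding
  namespace: unkey
  labels:
    app: ctrl
subjects:
  - kind: ServiceAccount
    name: ctrl-service-account
    namespace: unkey
roleRef:
  kind: Role
  name: ctrl-deployment-manager
  apiGroup: rbac.authorization.k8s.io
EOF

# Sanity checks: namespaced kind, namespace on metadata and subject, no cluster scope.
grep -q 'kind: RoleBinding' "$out" && echo "RoleBinding: ok"
grep -c 'namespace: unkey' "$out"                       # prints: 2
! grep -q 'ClusterRole' "$out" && echo "no cluster scope: ok"
```

Note that `roleRef` is immutable on an existing binding, so applying this change requires deleting the old ClusterRoleBinding first.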
```yaml
containers:
  - name: minio
    image: bitnami/minio:2025.7.23
    ports:
      - containerPort: 9000
        name: api
      - containerPort: 9001
        name: console
    env:
      - name: MINIO_ROOT_USER
        value: "minio_root_user"
      - name: MINIO_ROOT_PASSWORD
        value: "minio_root_password"
      - name: MINIO_API_PORT_NUMBER
        value: "9000"
      - name: MINIO_CONSOLE_PORT_NUMBER
        value: "9001"
      - name: MINIO_DEFAULT_BUCKETS
        value: "vault"
    volumeMounts:
```
🛠️ Refactor suggestion
MinIO creds in manifest: move to Secret
Root user/password and default buckets shouldn’t live in git even for dev.
```diff
-    env:
-      - name: MINIO_ROOT_USER
-        value: "minio_root_user"
-      - name: MINIO_ROOT_PASSWORD
-        value: "minio_root_password"
+    env:
+      - name: MINIO_ROOT_USER
+        valueFrom:
+          secretKeyRef: { name: s3-secrets, key: root_user }
+      - name: MINIO_ROOT_PASSWORD
+        valueFrom:
+          secretKeyRef: { name: s3-secrets, key: root_password }
       - name: MINIO_API_PORT_NUMBER
```

I can provide an s3-secrets template with stringData for quick bootstrapping.
📝 Committable suggestion
Original:

```yaml
containers:
  - name: minio
    image: bitnami/minio:2025.7.23
    ports:
      - containerPort: 9000
        name: api
      - containerPort: 9001
        name: console
    env:
      - name: MINIO_ROOT_USER
        value: "minio_root_user"
      - name: MINIO_ROOT_PASSWORD
        value: "minio_root_password"
      - name: MINIO_API_PORT_NUMBER
        value: "9000"
      - name: MINIO_CONSOLE_PORT_NUMBER
        value: "9001"
      - name: MINIO_DEFAULT_BUCKETS
        value: "vault"
    volumeMounts:
```

Suggested:

```yaml
containers:
  - name: minio
    image: bitnami/minio:2025.7.23
    ports:
      - containerPort: 9000
        name: api
      - containerPort: 9001
        name: console
    env:
      - name: MINIO_ROOT_USER
        valueFrom:
          secretKeyRef: { name: s3-secrets, key: root_user }
      - name: MINIO_ROOT_PASSWORD
        valueFrom:
          secretKeyRef: { name: s3-secrets, key: root_password }
      - name: MINIO_API_PORT_NUMBER
        value: "9000"
      - name: MINIO_CONSOLE_PORT_NUMBER
        value: "9001"
      - name: MINIO_DEFAULT_BUCKETS
        value: "vault"
    volumeMounts:
```
🤖 Prompt for AI Agents
In go/k8s/manifests/s3.yaml around lines 32 to 51 the MinIO root user/password
and default buckets are hardcoded in the pod spec; move these to a Kubernetes
Secret and reference them via env.valueFrom.secretKeyRef. Create an s3-secrets
manifest (stringData) with keys like MINIO_ROOT_USER, MINIO_ROOT_PASSWORD and
MINIO_DEFAULT_BUCKETS (and any port overrides if needed) for bootstrapping,
remove the plaintext values from the Deployment/StatefulSet, and update each env
entry to use valueFrom.secretKeyRef pointing to the s3-secrets name and
appropriate key. Ensure the Secret is excluded from git or stored in a safe
place.
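A runnable sketch of the s3-secrets bootstrap manifest the review offers — dev values only, with key names chosen to match the `secretKeyRef` entries suggested above (real credentials should be generated out-of-band and kept out of git):

```shell
# Generate the s3-secrets manifest to a scratch file. `stringData` accepts
# plaintext, so no manual base64 step is needed for bootstrapping.
out=$(mktemp)
cat > "$out" <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: s3-secrets
  namespace: unkey
type: Opaque
stringData:
  root_user: minio_root_user
  root_password: minio_root_password
EOF

# The keys here must match the secretKeyRef keys in the MinIO container env.
grep -q 'root_user: minio_root_user' "$out" && echo "root_user key: ok"
grep -q 'root_password: minio_root_password' "$out" && echo "root_password key: ok"
```

From there it is a single `kubectl apply -f` against the generated file, and the Secret file itself can live in `.gitignore` or a secrets manager.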
```make
k8s-up: k8s-check ## Deploy all services to current Kubernetes cluster
	@echo "Building Docker images..."
	@docker build -t unkey/mysql:latest -f ../deployment/Dockerfile.mysql ../
	@docker build -t unkey/clickhouse:latest -f ../deployment/Dockerfile.clickhouse ../
	@docker build -t unkey:latest .
	@echo "Creating namespace..."
	@kubectl apply -f k8s/manifests/namespace.yaml
	@echo "Applying Kubernetes manifests..."
	@kubectl apply -f k8s/manifests/
	@echo "Waiting for services to be ready..."
	@kubectl wait --for=condition=ready pod -l app=mysql -n unkey --timeout=180s
	@kubectl wait --for=condition=ready pod -l app=clickhouse -n unkey --timeout=180s
	@kubectl wait --for=condition=ready pod -l app=s3 -n unkey --timeout=180s
	@kubectl wait --for=condition=ready pod -l app=api -n unkey --timeout=180s
	@kubectl wait --for=condition=ready pod -l app=gw -n unkey --timeout=180s
	@kubectl wait --for=condition=ready pod -l app=ctrl -n unkey --timeout=180s
	@kubectl wait --for=condition=ready pod -l app=dashboard -n unkey --timeout=180s
	@echo "Kubernetes environment is ready!"
	@echo ""
	@echo "Services accessible via NodePort on localhost - check actual ports with:"
	@echo ""
	@echo "Check NodePort assignments: make k8s-ports"
	@make k8s-status
```
Add waits for all dependencies, configurable timeout, and implement referenced status/ports targets.
- api/gw/ctrl can flap if redis/planetscale/observability aren’t ready.
- Replace hardcoded 180s with K8S_WAIT_TIMEOUT?=.
- You echo "make k8s-ports" and call "make k8s-status", but neither target exists; implement both.
Apply:
```diff
+k8s_wait_timeout ?= 300
 k8s-up: k8s-check ## Deploy all services to current Kubernetes cluster
 	@echo "Building Docker images..."
 	@docker build -t unkey/mysql:latest -f ../deployment/Dockerfile.mysql ../
 	@docker build -t unkey/clickhouse:latest -f ../deployment/Dockerfile.clickhouse ../
 	@docker build -t unkey:latest .
 	@echo "Creating namespace..."
 	@kubectl apply -f k8s/manifests/namespace.yaml
 	@echo "Applying Kubernetes manifests..."
 	@kubectl apply -f k8s/manifests/
 	@echo "Waiting for services to be ready..."
-	@kubectl wait --for=condition=ready pod -l app=mysql -n unkey --timeout=180s
-	@kubectl wait --for=condition=ready pod -l app=clickhouse -n unkey --timeout=180s
-	@kubectl wait --for=condition=ready pod -l app=s3 -n unkey --timeout=180s
-	@kubectl wait --for=condition=ready pod -l app=api -n unkey --timeout=180s
-	@kubectl wait --for=condition=ready pod -l app=gw -n unkey --timeout=180s
-	@kubectl wait --for=condition=ready pod -l app=ctrl -n unkey --timeout=180s
-	@kubectl wait --for=condition=ready pod -l app=dashboard -n unkey --timeout=180s
+	@kubectl wait --for=condition=ready pod -l app=mysql -n unkey --timeout=$(k8s_wait_timeout)s
+	@kubectl wait --for=condition=ready pod -l app=clickhouse -n unkey --timeout=$(k8s_wait_timeout)s
+	@kubectl wait --for=condition=ready pod -l app=redis -n unkey --timeout=$(k8s_wait_timeout)s
+	@kubectl wait --for=condition=ready pod -l app=planetscale -n unkey --timeout=$(k8s_wait_timeout)s || true
+	@kubectl wait --for=condition=ready pod -l app=s3 -n unkey --timeout=$(k8s_wait_timeout)s
+	@kubectl wait --for=condition=ready pod -l app=otel-collector -n unkey --timeout=$(k8s_wait_timeout)s || true
+	@kubectl wait --for=condition=ready pod -l app=api -n unkey --timeout=$(k8s_wait_timeout)s
+	@kubectl wait --for=condition=ready pod -l app=gw -n unkey --timeout=$(k8s_wait_timeout)s
+	@kubectl wait --for=condition=ready pod -l app=ctrl -n unkey --timeout=$(k8s_wait_timeout)s
+	@kubectl wait --for=condition=ready pod -l app=dashboard -n unkey --timeout=$(k8s_wait_timeout)s
 	@echo "Kubernetes environment is ready!"
 	@echo ""
 	@echo "Services accessible via NodePort on localhost - check actual ports with:"
 	@echo ""
 	@echo "Check NodePort assignments: make k8s-ports"
-	@make k8s-status
+	@$(MAKE) k8s-status
```

Add these targets:
```make
k8s-status:
	@kubectl get pods -n unkey -o wide
	@kubectl get svc -n unkey

k8s-ports:
	@kubectl get svc -n unkey -o custom-columns=NAME:.metadata.name,TYPE:.spec.type,PORTS:.spec.ports[*].nodePort,CLUSTER-IP:.spec.clusterIP
```
🤖 Prompt for AI Agents
In go/Makefile around lines 81 to 104, the k8s-up target uses a hardcoded 180s
timeout and doesn't wait for all dependent services or provide the referenced
helper targets; change the timeout to a configurable variable
K8S_WAIT_TIMEOUT?=180s and replace each kubectl wait --timeout=180s with
--timeout=$(K8S_WAIT_TIMEOUT), add additional kubectl wait lines for redis,
planetscale and observability (or whatever service labels your manifests use) to
ensure api/gw/ctrl dependencies are ready, and implement the missing k8s-status
and k8s-ports phony targets as described so the echoed instructions actually
work.
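The `?=` assignment the review suggests behaves like a default-with-override: it assigns only when the variable is not already set, so `make k8s-up K8S_WAIT_TIMEOUT=300s` wins over the default. A standalone shell sketch of the same semantics (variable name mirrors the suggested `K8S_WAIT_TIMEOUT`; no Make or cluster needed):

```shell
# Shell equivalent of Make's `K8S_WAIT_TIMEOUT ?= 180s`:
# assign the default only when the variable is unset or empty.
unset K8S_WAIT_TIMEOUT
: "${K8S_WAIT_TIMEOUT:=180s}"
echo "default:  $K8S_WAIT_TIMEOUT"   # prints: default:  180s

# An explicit value (like a command-line override) takes precedence;
# the default-assignment then becomes a no-op.
K8S_WAIT_TIMEOUT=300s
: "${K8S_WAIT_TIMEOUT:=180s}"
echo "override: $K8S_WAIT_TIMEOUT"   # prints: override: 300s
```

The same precedence applies in Make: command-line and environment assignments override `?=` defaults in the Makefile.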
| @echo "Stopping all pods..." | ||
| @kubectl scale deployment --all --replicas=0 -n unkey | ||
| @echo "All pods stopped" | ||
|
|
||
| k8s-down: ## Delete all services from current Kubernetes cluster |
k8s-stop should also scale StatefulSets (MySQL/ClickHouse likely use them).
Current rule only scales Deployments.
```diff
 k8s-stop: ## Stop all pods (scale deployments to 0)
 	@echo "Stopping all pods..."
 	@kubectl scale deployment --all --replicas=0 -n unkey
+	@kubectl scale statefulset --all --replicas=0 -n unkey
 	@echo "All pods stopped"
```

📝 Committable suggestion
Original:

```make
k8s-stop: ## Stop all pods (scale deployments to 0)
	@echo "Stopping all pods..."
	@kubectl scale deployment --all --replicas=0 -n unkey
	@echo "All pods stopped"
```

Suggested:

```make
k8s-stop: ## Stop all pods (scale deployments to 0)
	@echo "Stopping all pods..."
	@kubectl scale deployment --all --replicas=0 -n unkey
	@kubectl scale statefulset --all --replicas=0 -n unkey
	@echo "All pods stopped"
```
🤖 Prompt for AI Agents
In go/Makefile around lines 105 to 109, the k8s-stop target only scales
Deployments and therefore misses StatefulSets (used by MySQL/ClickHouse); update
the target to also scale StatefulSets to 0 by running kubectl scale statefulset
--all --replicas=0 -n unkey in addition to the existing deployment scale command
(preserve the echoes and error behavior).
```make
	$(call deploy-service,api,API,unkey:latest,.)

start-gw: k8s-check ## Deploy only Gateway service
	$(call deploy-service,gw,Gateway,unkey:latest,.)

start-ctrl: k8s-check ## Deploy only ctrl service
	$(call deploy-service,ctrl,Control Plane,unkey:latest,.)

start-dashboard: k8s-check ## Deploy only dashboard service
```
💡 Verification agent
🧩 Analysis chain
Ensure manifests use the locally built image and sane pull policy.
You build unkey:latest; make sure api/gw/ctrl manifests reference image: unkey:latest with imagePullPolicy: IfNotPresent (or Never for local).
🏁 Script executed:

```shell
#!/bin/bash
# Verify images/pullPolicies for api/gw/ctrl
rg -nC2 -g 'k8s/manifests/*.yaml' -e 'name:\s*(api|gw|ctrl)' -e 'image:\s*unkey:latest' -e 'imagePullPolicy:'
```

Length of output: 148
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "PWD: $(pwd)"
echo
echo "Searching for 'deploy-service' usages..."
rg -n --hidden -S "deploy-service" || true
echo
echo "Listing k8s/manifests directories if they exist..."
[ -d k8s ] && ls -la k8s || echo "no k8s dir"
[ -d k8s/manifests ] && ls -la k8s/manifests || echo "no k8s/manifests dir"
echo
echo "Searching repo for image references to 'unkey' and for imagePullPolicy (with context)..."
rg -n --hidden -S -C2 -E 'image:\s*.*unkey(:latest)?|imagePullPolicy:|name:\s*(api|gw|ctrl)' || true
echo
echo "Listing all YAML files found (first 200 lines) for manual inspection:"
rg --hidden --files -g '*.yml' -g '*.yaml' -S | sed -n '1,200p' || true
```

Length of output: 15270
🏁 Script executed:

```shell
#!/bin/bash
set -euo pipefail
echo "=== Show deploy-service macro from go/Makefile ==="
awk 'BEGIN{p=0} /define deploy-service/{p=1} p{print NR": "$0} /endef/{if(p==1){print NR": "$0; exit}}' go/Makefile || true
echo
echo "=== Show Makefile lines around start-api/start-gw/start-ctrl ==="
sed -n '110,170p' go/Makefile || true
echo
for f in go/k8s/manifests/api.yaml go/k8s/manifests/gw.yaml go/k8s/manifests/ctrl.yaml; do
  echo "=== Grep relevant keys in $f ==="
  rg -n -C1 -E '^\s*(image:|imagePullPolicy:|name:|containers:)' "$f" || true
  echo
  echo "=== Full file ($f) head (first 200 lines) ==="
  sed -n '1,200p' "$f" || true
  echo
done
echo "=== Search repo for any 'image: .*unkey' or 'imagePullPolicy' occurrences ==="
rg -n --hidden -S -E 'image:\s*.*unkey(:latest)?|imagePullPolicy:' || true
```

Length of output: 14386
Fix image-tag mismatch: make built image tags match k8s manifests (or update manifests to match builds).
Makefile start-api/start-gw/start-ctrl build/tag unkey:latest, but go/k8s/manifests/api.yaml, gw.yaml, ctrl.yaml reference unkey-api:latest / unkey-gw:latest / unkey-ctrl:latest (they already set imagePullPolicy: Never). Action: either change go/Makefile to build/tag the per-service images (recommended — e.g., build unkey-api:latest, unkey-gw:latest, unkey-ctrl:latest) or update the manifests to use unkey:latest and set imagePullPolicy to IfNotPresent (or keep Never for local). Files: go/Makefile (start-api/start-gw/start-ctrl), go/k8s/manifests/{api,gw,ctrl}.yaml.
🤖 Prompt for AI Agents
In go/Makefile around lines 147-155 the start-api/start-gw/start-ctrl targets
currently reference images tagged as unkey:latest but the k8s manifests in
go/k8s/manifests/api.yaml, gw.yaml, ctrl.yaml expect unkey-api:latest,
unkey-gw:latest, unkey-ctrl:latest; update the Makefile so the start-* targets
build and tag the service-specific images (e.g., unkey-api:latest,
unkey-gw:latest, unkey-ctrl:latest) instead of unkey:latest, or alternatively
update the three manifest files to reference unkey:latest (and set
imagePullPolicy appropriately); preferred fix: change the Makefile to produce
per-service tags that match the manifests and keep imagePullPolicy as currently
configured in the manifests.
There was a problem hiding this comment.
Actionable comments posted: 1
🧹 Nitpick comments (8)
go/K8S_DEVELOPMENT.md (8)
7-9: Clarify prerequisites: add Tilt, versions, and resource guidance.List required versions and minimal cluster resources; devs regularly hit env drift here.
- Docker Desktop with Kubernetes enabled OR OrbStack with Kubernetes enabled - kubectl + - Tilt (for hot reload): install and verify availability + +Recommended versions and checks: +```bash +kubectl version --short +tilt version +docker version +kubectl config current-context +``` +Minimum resources (local cluster): 4 CPU, 8 GiB RAM.
10-13: Explain what “make k8s-check” validates.Documenting checks and expected output reduces setup confusion.
Check requirements: ```bash make k8s-check+This validates: kubectl context reachability, namespace permissions, Docker daemon availability, and (optionally) Tilt presence. Document any failures and remediation.
--- `17-25`: **Add readiness/wait steps after bring-up.** Ensure users don’t interact before pods are ready. ```diff Start everything: ```bash make k8s-up+Watch readiness:
+bash +kubectl -n unkey get pods -w +# or wait up to 2m for all pods +kubectl -n unkey wait --for=condition=Ready pod --all --timeout=120s +Start with hot reloading (requires Tilt):
make dev--- `26-26`: **Add an “Accessing services” section with safe port-forwarding (avoid 8080 conflicts).** Right now it’s unclear how to hit API/GW locally. ```diff +## Accessing Services + +Use port-forwarding (avoids host port collisions): +```bash +# Gateway (local 18080 -> service 8080) +kubectl -n unkey port-forward svc/gw 18080:8080 +# API (adjust if different) +kubectl -n unkey port-forward svc/api 18081:8080 +``` +If a port is in use: +```bash +lsof -nP -i :8080 +``` +Update the left-hand local port (e.g., 18080) instead of killing other processes.
`27-37`: **Call out service dependencies.**

Note which infra must be up before app services to prevent crashloops.

````diff
 ## Individual Services
 ```bash
 make start-mysql
 make start-clickhouse
 make start-redis
 make start-s3
 make start-api
 make start-gw
 make start-ctrl
 ```
+Notes:
+- Start infra (mysql, redis, clickhouse, s3) before app services (api, gw, ctrl).
+- Commands are idempotent; re-running updates or ensures the deployment.
````

---

`53-65`: **Tilt section: add install hint and basic usage tips.**

Small additions smooth first-time usage.

````diff
 ## Tilt (Optional)
+Install (macOS):
+```bash
+brew install tilt-dev/tap/tilt
+```
+Verify:
+```bash
+tilt version
+```
+
 Start specific services:
 ```bash
 tilt up -- --services=mysql --services=clickhouse
 tilt up -- --services=api --services=gw --services=ctrl
 tilt up -- --services=all
 ```
 Stop Tilt:
 ```bash
 tilt down
 ```
+Tip: Press 'q' in the Tilt UI to quit; logs are also available via `kubectl -n unkey logs`.
````

---

`65-65`: **Add a short troubleshooting section (port conflicts, logs, describe).**

Preempt the common local issues.

````diff
+## Troubleshooting
+
+- Port in use: prefer changing the local port in `kubectl port-forward` (e.g., 18080:8080) instead of killing other processes.
+- Inspect failures:
+  - `kubectl -n unkey get pods`
+  - `kubectl -n unkey describe pod <name>`
+  - `kubectl -n unkey logs deploy/<api|gw|ctrl>`
+- Stuck images: ensure Docker is running and retry `make k8s-up`.
````
`1-65`: **Optional: add a brief "Secrets and Env" note.**

Minimize friction and avoid committing secrets.

````diff
+# Secrets and Environment
+
+Provide a `.env.local` (not committed) and create a K8s Secret:
+```bash
+kubectl -n unkey create secret generic unkey-env --from-env-file=.env.local
+```
+Reference this secret in deployments for API keys, DB creds, and S3/MinIO access.
````
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
go/K8S_DEVELOPMENT.md(1 hunks)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: Analyze (javascript-typescript)
## Management

```bash
# Stop everything
make k8s-down

# Reset environment
make k8s-reset

# View services
kubectl get pods -n unkey
kubectl get services -n unkey
```
Document ‘k8s-status’ target and warn about destructive reset.
Expose status helper and add a data-loss warning for reset.
````diff
 ## Management
 ```bash
 # Stop everything
 make k8s-down

 # Reset environment
 make k8s-reset

-# View services
+# Status and services
+make k8s-status
 kubectl get pods -n unkey
 kubectl get services -n unkey
 ```
+
+Warning: `make k8s-reset` may delete persistent volumes and data (mysql/clickhouse/minio). Back up data or use ephemeral test DBs before running.
````
<details>
<summary>🤖 Prompt for AI Agents</summary>
In go/K8S_DEVELOPMENT.md around lines 39 to 51, the Make targets section should
expose a status helper and warn about data loss from reset: add a new "make
k8s-status" entry under a renamed subsection (e.g., "Status and services") and
insert a clear warning that "make k8s-reset" may delete persistent volumes and
data (mysql/clickhouse/minio), advising to back up data or use ephemeral test
DBs before running; ensure the new lines appear before the kubectl commands and
format the warning as a separate paragraph or block for visibility.
</details>
<!-- This is an auto-generated comment by CodeRabbit -->
What does this PR do?
don't mind me
If there is not an issue for this, please create one first. This is used for tracking purposes and also helps us understand why this PR exists.
Type of change
How should this be tested?
Checklist
Required
- `pnpm build`
- `pnpm fmt`
- `console.log`s
- `git pull origin main`

Appreciated
Summary by CodeRabbit
New Features
Documentation
Chores