
feat: promote a deployment#4015

Merged
chronark merged 10 commits into main from 09-23-feat_promote_a_deployment
Sep 24, 2025

Conversation


@chronark chronark commented Sep 23, 2025

What does this PR do?

https://app.cushion.so/unkey/post/post_YUjPDdJRarA4nP8QWbKrr

This PR adds a new Promotion feature to deployments, allowing users to promote a target deployment to live status. The implementation includes:

  1. A new PromotionDialog component that displays information about the current live deployment and the target deployment
  2. Backend support for the promotion operation via a new tRPC endpoint
  3. UI integration in the deployments table with a new "Promote" action
  4. Domain list display in the deployments table to show which domains are associated with each deployment
  5. Schema updates to support tracking rolled back deployment IDs

This feature complements the existing rollback functionality, giving users more control over their deployment workflow.

Fixes #

Type of change

  • New feature (non-breaking change which adds functionality)

How should this be tested?

  • Create multiple deployments for a project
  • Verify that the "Promote" option appears in the deployment actions menu for eligible deployments
  • Test promoting a deployment and verify that domains are correctly reassigned
  • Verify that the domain list in the deployments table shows the correct domains for each deployment
  • Test the interaction between rollback and promotion features

Checklist

Required

  • Filled out the "How to test" section in this PR
  • Read Contributing Guide
  • Self-reviewed my own code
  • Commented on my code in hard-to-understand areas
  • Ran pnpm build
  • Ran pnpm fmt
  • Checked for warnings, there are none
  • Removed all console.logs
  • Merged the latest changes from main onto my branch with git pull origin main
  • My changes don't cause any responsiveness issues

Appreciated

  • If a UI change was made: Added a screen recording or screenshots to this PR
  • Updated the Unkey Docs if changes were necessary

Summary by CodeRabbit

  • New Features

    • Added a Promote workflow to switch the live deployment to a selected ready deployment, with confirmations and toasts.
    • Introduced a Promotion dialog from the deployments table actions.
    • Added a Domains column showing bound domains per deployment.
    • Audit logs now include deployment promotion events.
  • Improvements

    • Refined deployments table: clearer Live/Rolled Back status, updated styling, and adjusted column widths for readability.
    • Rollback and Promote actions respect environment and deployment readiness, with proper enable/disable states.
  • Bug Fixes

    • Rollback now refreshes related data consistently after completion.


changeset-bot bot commented Sep 23, 2025

⚠️ No Changeset found

Latest commit: 74b76c1

Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.

This PR includes no changesets

When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types

Click here to learn what changesets are, and how to add one.

Click here if you're a maintainer who wants to add a changeset to this PR


coderabbitai bot commented Sep 23, 2025

📝 Walkthrough

Walkthrough

Adds deployment promotion across UI, API, and control plane. Introduces PromotionDialog and table actions/UI, a new tRPC promote mutation, CTRL Promote RPC/handler, domain/environment query additions, schema changes (isRolledBack boolean, domains.environment_id), rollback flow adjustments, project list/type updates, and related SQL/proto/codegen updates.

Changes

Cohort / File(s) Summary
Dashboard UI: dialogs and actions
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/promotion-dialog.tsx, .../rollback-dialog.tsx, .../table/components/actions/deployment-list-table-action.popover.constants.tsx
Add PromotionDialog; wire Promote/Rollback actions with guards and modals; update rollback dialog domain query and add extra refetch; remove generated menu items approach.
Dashboard UI: table and utils
.../table/components/domain_list.tsx, .../table/deployments-list.tsx, .../table/utils/get-row-class.ts, .../hooks/use-deployments.ts
New DomainList component; add Domains column; adjust widths and row status/class computation to use liveDeploymentId + isRolledBack; minor whitespace tweak.
Dashboard UI: project creation
apps/dashboard/app/(app)/projects/_components/create-project/create-project-dialog.tsx
Replace rolledBackDeploymentId:null with isRolledBack:false in new project payload.
Types/collections
apps/dashboard/lib/collections/deploy/projects.ts, apps/dashboard/lib/trpc/routers/deploy/project/list.ts
Project shape change: remove rolledBackDeploymentId; add isRolledBack boolean; update SQL mapping and public type exposure.
tRPC: deployment routes
apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts, .../deployment/rollback.ts, apps/dashboard/lib/trpc/routers/index.ts
Add promote mutation (validations, CTRL call, audit, empty response); change rollback to return empty object; expose promote in router.
Control plane service (Go)
go/apps/ctrl/services/deployment/promote.go, .../deployment/rollback.go, .../deployment/deploy_workflow.go
Add Promote handler (validate, fetch domains, upsert gateway, reassign, update project, clear isRolledBack); adjust rollback to new queries/params and empty response; include environment_id usage and switch to project.IsRolledBack semantics.
DB schema and queries (Go/sqlc)
go/pkg/db/schema.sql, .../models_generated.go, .../project_find_by_id.sql_generated.go, .../project_update_deployments.sql_generated.go, .../queries/project_find_by_id.sql, .../queries/project_update_deployments.sql, .../domain_insert.sql_generated.go, .../bulk_domain_insert.sql_generated.go, .../domain_find_by_deployment_id.sql_generated.go, .../domain_find_for_promotion.sql_generated.go, .../domain_find_for_rollback.sql_generated.go, .../domain_reassign.sql_generated.go, .../querier_generated.go, .../queries/domain_insert.sql, .../queries/domain_find_by_deployment_id.sql, .../queries/domain_find_for_promotion.sql, .../queries/domain_find_for_rollback.sql, .../queries/domain_reassign.sql
Schema: add domains.environment_id; remove domains.is_rolled_back; replace projects.rolled_back_deployment_id with projects.is_rolled_back. Add find-for-promotion/rollback queries and generated types/methods; update inserts/reassign; remove is_rolled_back fields from domain selects; update project find/update to is_rolled_back.
Proto
go/proto/ctrl/v1/deployment.proto
Add Promote RPC with request/response; change RollbackResponse to empty message.
Internal DB schema (TS)
internal/db/src/schema/domains.ts, internal/db/src/schema/projects.ts
Mirror schema changes: domains add environmentId, remove isRolledBack; projects remove rolledBackDeploymentId, add isRolledBack boolean.
Audit schema
internal/schema/src/auditlog.ts
Add deployment.promote to audit event enum.
Scripts
deploy.bash
New script to launch 100 concurrent Go deploy commands with production env.

Sequence Diagram(s)

sequenceDiagram
  autonumber
  actor User
  participant UI as Dashboard UI
  participant TRPC as tRPC (dashboard)
  participant CTRL as CTRL DeploymentService
  participant DB as DB (projects/domains)
  participant GW as Gateway Config
  participant Audit as Audit Logs

  User->>UI: Click "Promote"
  UI->>TRPC: mutation.deploy.promote(targetDeploymentId)
  TRPC->>DB: Verify target deployment + project (workspace)
  DB-->>TRPC: Target(ready), Project(liveDeploymentId)
  TRPC->>CTRL: Promote(targetDeploymentId)
  CTRL->>DB: Load Deployment, Project
  DB-->>CTRL: Records
  CTRL->>DB: FindDomainsForPromotion(envId, sticky in [environment, live])
  DB-->>CTRL: Domains
  CTRL->>GW: Upsert gateway entries for domains
  GW-->>CTRL: OK
  CTRL->>DB: Reassign domains -> deploymentId
  DB-->>CTRL: OK
  CTRL->>DB: Update Project(liveDeploymentId=target, isRolledBack=false)
  DB-->>CTRL: OK
  CTRL-->>TRPC: PromoteResponse {}
  TRPC->>Audit: insert "deployment.promote"
  TRPC-->>UI: {}
  UI->>UI: Invalidate queries, show toast
sequenceDiagram
  autonumber
  actor User
  participant UI as Dashboard UI
  participant TRPC as tRPC (dashboard)
  participant CTRL as CTRL DeploymentService
  participant DB as DB

  User->>UI: Click "Rollback"
  UI->>TRPC: mutation.deploy.rollback()
  TRPC->>CTRL: Rollback()
  CTRL->>DB: FindDomainsForRollback(envId, sticky in [environment, live])
  DB-->>CTRL: Domains
  CTRL->>DB: Reassign domains -> previous deployment
  DB-->>CTRL: OK
  CTRL->>DB: Update Project(isRolledBack=true, liveDeploymentId stays)
  DB-->>CTRL: OK
  CTRL-->>TRPC: RollbackResponse {}
  TRPC-->>UI: {}
  UI->>UI: Refetch deployments/domains/projects
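The promote flow in the first diagram can be condensed into a small Go sketch. Everything here is a hypothetical stand-in: the `PromoteDeps` struct, its function signatures, and the sticky values are illustrative placeholders for the real sqlc-generated queries and gateway client, not the actual handler code.

```go
package main

import (
	"errors"
	"fmt"
)

// Domain is a minimal, illustrative stand-in for a domains row.
type Domain struct {
	ID     string
	Domain string
}

// PromoteDeps bundles the operations the handler needs. Every field is a
// hypothetical signature, not the generated Querier API.
type PromoteDeps struct {
	FindDomainsForPromotion  func(envID string, sticky []string) ([]Domain, error)
	UpsertGatewayConfig      func(d Domain, deploymentID string) error
	ReassignDomain           func(domainID, deploymentID string) error
	UpdateProjectDeployments func(projectID, liveDeploymentID string, isRolledBack bool) error
}

// promote mirrors the sequence diagram: load sticky domains for the
// environment, push gateway config, reassign domains to the target
// deployment, then flip the project's live deployment and clear the
// rolled-back flag.
func promote(deps PromoteDeps, projectID, envID, targetDeploymentID string) error {
	domains, err := deps.FindDomainsForPromotion(envID, []string{"environment", "live"})
	if err != nil {
		return fmt.Errorf("find domains: %w", err)
	}
	if len(domains) == 0 {
		return errors.New("no domains to promote")
	}
	for _, d := range domains {
		if err := deps.UpsertGatewayConfig(d, targetDeploymentID); err != nil {
			return fmt.Errorf("gateway upsert for %s: %w", d.Domain, err)
		}
	}
	for _, d := range domains {
		if err := deps.ReassignDomain(d.ID, targetDeploymentID); err != nil {
			return fmt.Errorf("reassign %s: %w", d.Domain, err)
		}
	}
	return deps.UpdateProjectDeployments(projectID, targetDeploymentID, false)
}
```

Ordering matters in this sketch: gateway config is upserted before domains are reassigned, so traffic never points at a deployment the gateway does not know about.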

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120–180 minutes

Suggested labels

Core Team, UI

Suggested reviewers

  • imeyer
  • mcstepp
  • ogzhanolguncu
  • perkinsjr
  • Flo4604
  • MichaelUnkey

Pre-merge checks and finishing touches

✅ Passed checks (3 passed)
  • Title Check ✅ Passed: The title "feat: promote a deployment" is a concise, single-sentence summary that accurately captures the primary change: adding promotion functionality across the frontend and backend (PromotionDialog, tRPC promote endpoint, proto and service changes). It is clear, specific, and suitable for a teammate scanning commit history.
  • Description Check ✅ Passed: The PR description largely follows the repository template: it includes a clear summary, marks the change as a new feature, provides a "How should this be tested?" section with steps, and shows a completed checklist. However, the issue field is left empty ("Fixes #") and should reference an issue or explicitly state that none exists, and the schema note could clarify that rolledBackDeploymentId was replaced by an isRolledBack boolean and that domains gained an environment_id column. Once the issue reference is added and the schema wording clarified (including any migration notes, if applicable), the description will be fully compliant with the template.
  • Docstring Coverage ✅ Passed: No functions found in the changes; docstring coverage check skipped.




@chronark chronark mentioned this pull request Sep 23, 2025

chronark commented Sep 23, 2025

This stack of pull requests is managed by Graphite. Learn more about stacking.

@chronark chronark marked this pull request as ready for review September 23, 2025 13:25
@ogzhanolguncu
Contributor

On it

@chronark chronark force-pushed the 09-23-feat_promote_a_deployment branch from 26410ea to f843e39 Compare September 23, 2025 13:29

github-actions bot commented Sep 23, 2025

Thank you for following the naming conventions for pull request titles! 🙏

@ogzhanolguncu
Contributor

[image] Not sure if this is intentional, but this looks a bit off.


ogzhanolguncu commented Sep 23, 2025

Screen.Recording.2025-09-23.at.17.39.10.mov

I cannot roll back again after I have already rolled back once. We should either prevent rolling back twice or disable the buttons with tooltips so users aren't confused.

I think this scenario should behave like new deployments: while we are rolled back, nothing should go live without being promoted first. This is already the case for new deployments, which are not flagged as live until we promote them.


@ogzhanolguncu ogzhanolguncu left a comment


Other than my few comments, all looks great

@chronark
Collaborator Author

Screen.Recording.2025-09-23.at.17.39.10.mov
I cannot roll back again after I have already rolled back once. We should either prevent rolling back twice or disable the buttons with tooltips so users aren't confused.

I think this scenario should behave like new deployments: while we are rolled back, nothing should go live without being promoted first. This is already the case for new deployments, which are not flagged as live until we promote them.

Can you show me what's in your domains table in the DB? A screenshot is fine.


vercel bot commented Sep 23, 2025

The latest updates on your projects. Learn more about Vercel for GitHub.

  • dashboard: Ready (Preview, Comment), updated Sep 24, 2025 2:11pm (UTC)
  • engineering: Ignored (Preview), updated Sep 24, 2025 2:11pm (UTC)
  • 1 skipped deployment

@chronark
Collaborator Author

[image] Not sure if this is intentional, but this looks a bit off.

wdym?

@chronark chronark force-pushed the 09-23-feat_promote_a_deployment branch from 7af0d67 to e1e4560 Compare September 23, 2025 15:12

@ogzhanolguncu ogzhanolguncu left a comment


I'll take care of any UI issues that come up. I already covered some of them in my gateway-logs branch anyway. I'll fix domain revalidations, etc.

@chronark
Collaborator Author

Thank you, let's get this one merged.

Base automatically changed from 09-23-fix_multiple_rolled_back_states to main September 24, 2025 14:03
@chronark chronark merged commit e075fa7 into main Sep 24, 2025
18 of 19 checks passed
@chronark chronark deleted the 09-23-feat_promote_a_deployment branch September 24, 2025 14:14

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 10

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
apps/dashboard/lib/trpc/routers/deploy/deployment/rollback.ts (1)

82-93: Fix error handling: message must be a string; avoid nested .catch with try/catch.

Using message: err can leak objects and violates expected type; let the outer try/catch handle it.

-      await ctrl
-        .rollback({
-          sourceDeploymentId: targetDeployment.project.liveDeploymentId,
-          targetDeploymentId: targetDeployment.id,
-        })
-        .catch((err) => {
-          console.error(err);
-          throw new TRPCError({
-            code: "INTERNAL_SERVER_ERROR",
-            message: err,
-          });
-        });
+      await ctrl.rollback({
+        sourceDeploymentId: targetDeployment.project.liveDeploymentId,
+        targetDeploymentId: targetDeployment.id,
+      });

And in the catch below, coerce to string:

-      console.error("Rollback request failed:", error);
-      throw new TRPCError({
-        code: "INTERNAL_SERVER_ERROR",
-        message: "Failed to communicate with control service",
-      });
+      console.error("Rollback request failed:", error);
+      throw new TRPCError({
+        code: "INTERNAL_SERVER_ERROR",
+        message: error instanceof Error ? error.message : "Failed to communicate with control service",
+      });
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx (1)

42-45: Use per-project collections for refetch — stop calling collection.deployments/domains

Top-level collection only exposes projects; collection.deployments/collection.domains are undefined and the refetch calls throw. Replace with the per‑project collections from collectionManager.getProjectCollections(projectId) and call their utils.refetch (use optional chaining). Files: apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx and promotion-dialog.tsx.

-  const domainCollection = collectionManager.getProjectCollections(
-    liveDeployment.projectId,
-  ).domains;
+  const projectCollections = collectionManager.getProjectCollections(liveDeployment.projectId);
+  const domainCollection = projectCollections.domains;
@@
-        // @ts-expect-error Their docs say it's here
-        collection.deployments.utils.refetch();
-        // @ts-expect-error Their docs say it's here
-        collection.domains.utils.refetch();
+        projectCollections.deployments?.utils?.refetch?.();
+        projectCollections.domains?.utils?.refetch?.();
go/apps/ctrl/services/deployment/deploy_workflow.go (1)

293-307: Wrong error variable used on insert failure.

insertError is checked, but err is returned, leading to misleading errors.

Apply:

-                    if insertError != nil {
-                        return fmt.Errorf("failed to create domain entry for deployment %s: %w", req.DeploymentID, err)
-                    }
+                    if insertError != nil {
+                        return fmt.Errorf("failed to create domain entry for deployment %s: %w", req.DeploymentID, insertError)
+                    }
🧹 Nitpick comments (24)
internal/schema/src/auditlog.ts (1)

73-74: Type the audit log event to the enum to prevent invalid values.

Use the declared enum instead of string to catch typos at runtime.

-  event: z.string(),
+  event: unkeyAuditLogEvents,
deploy.bash (1)

1-16: Make the script safer: kill children on exit, limit concurrency, remove hardcoded IDs.

Current script can flood a machine (100 go run) and leaves children on SIGINT; also contains hardcoded IDs. Build once and rate‑limit.

-#!/bin/bash
-
-for i in {1..100}; do
-  go run . deploy \
-    --context=./demo_api \
-    --workspace-id="ws_4pqNeih1b62QPv6g" \
-    --project-id="proj_4pqPKyhJ3b9fCnuH" \
-    --control-plane-url="http://localhost:7091" \
-    --env=production &
-    sleep 1
-done
-
-wait
+#!/usr/bin/env bash
+set -euo pipefail
+trap 'jobs -pr | xargs -r kill; wait || true' EXIT INT TERM
+
+CONCURRENCY="${CONCURRENCY:-10}"
+ITERATIONS="${ITERATIONS:-100}"
+CONTEXT="${CONTEXT:-./demo_api}"
+WORKSPACE_ID="${WORKSPACE_ID:?set WORKSPACE_ID}"
+PROJECT_ID="${PROJECT_ID:?set PROJECT_ID}"
+CTRL_URL="${CTRL_URL:-http://localhost:7091}"
+ENVIRONMENT="${ENVIRONMENT:-production}"
+
+# Build once to avoid recompiling 100x
+GO_BIN="${GO_BIN:-./unkeyctl}"
+if [ ! -x "$GO_BIN" ]; then
+  go build -o "$GO_BIN" .
+fi
+
+running=0
+for _ in $(seq "$ITERATIONS"); do
+  "$GO_BIN" deploy \
+    --context="$CONTEXT" \
+    --workspace-id="$WORKSPACE_ID" \
+    --project-id="$PROJECT_ID" \
+    --control-plane-url="$CTRL_URL" \
+    --env="$ENVIRONMENT" &
+  running=$((running+1))
+  if [ "$running" -ge "$CONCURRENCY" ]; then
+    wait -n
+    running=$((running-1))
+  fi
+  sleep 0.2
+done
+wait
internal/db/src/schema/projects.ts (2)

22-23: DB schema switch to boolean is_rolled_back: LGTM; ensure migration/backfill.

Provide a migration that backfills based on old column and drops it safely.

I can draft a Drizzle SQL migration like:

  • add column is_rolled_back TINYINT(1) NOT NULL DEFAULT 0
  • update projects set is_rolled_back = (rolled_back_deployment_id IS NOT NULL)
  • drop column rolled_back_deployment_id
    Do you want me to open a migration PR?

39-44: Naming consistency (optional): activeDeployment vs liveDeployment.

Consider renaming relation activeDeployment to liveDeployment for consistency with liveDeploymentId.

apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/domain_list.tsx (1)

25-29: Render clickable, accessible domain links (and consider tighter spacing).

Use anchors for navigation and reduce vertical padding to avoid row height inflation.

-  return (
-    <ul className="flex flex-col list-none py-2">
-      {domains.data.map((domain) => (
-        <li key={domain.id}>https://{domain.domain}</li>
-      ))}
-    </ul>
-  );
+  return (
+    <ul className="flex flex-col list-none py-1">
+      {domains.data.map((d) => (
+        <li key={d.id}>
+          <a
+            href={`https://${d.domain}`}
+            target="_blank"
+            rel="noopener noreferrer"
+            className="text-primary hover:underline"
+          >
+            https://{d.domain}
+          </a>
+        </li>
+      ))}
+    </ul>
+  );
go/apps/ctrl/services/deployment/rollback.go (3)

112-118: Log the environment_id instead of deployment_id for domain lookup failures.

The query filters by environment_id; logging deployment_id here is misleading.

-    s.logger.Error("failed to get domains",
-      "deployment_id", sourceDeployment.ID,
-      "error", err.Error(),
-    )
+    s.logger.Error("failed to get domains",
+      "environment_id", sourceDeployment.EnvironmentID,
+      "error", err.Error(),
+    )

164-173: Prefer transactional updates for s.db operations to avoid partial state.

Upserts to the gateway happen in the partition DB, then per-domain updates and finally the project update in the primary DB. If any domain update fails, the project’s live_deployment_id could still be updated (or vice‑versa) under some refactors. Wrap the s.db ReassignDomain loop and the UpdateProjectDeployments call in a single transaction to keep primary-DB state consistent. Cross-DB atomicity isn’t required; prioritize consistency within s.db.


167-167: Log domain identifiers instead of dumping the struct.

Structured logs are more useful with stable identifiers and avoid noisy struct dumps.

-    s.logger.Info("rolling back domain", "domain", change)
+    s.logger.Info("rolling back domain",
+      "domain_id", change.ID,
+      "deployment_id", change.DeploymentID.String,
+    )
go/pkg/db/queries/domain_find_by_deployment_id.sql (1)

2-14: Include environment_id in the result to match the current Domain shape and reduce follow-up queries.

Callers often need environment context; adding it keeps this query aligned with the updated schema.

 SELECT
     id,
     workspace_id,
     project_id,
+    environment_id,
     domain,
     deployment_id,
     sticky,
     created_at,
     updated_at
 FROM domains
 WHERE deployment_id = sqlc.arg(deployment_id)
 ORDER BY created_at ASC;
go/pkg/db/schema.sql (3)

317-317: Project rollback state change acknowledged; ensure migrations are in place.

Replacing rolled_back_deployment_id with is_rolled_back requires a data migration (e.g., set is_rolled_back=true where rolled_back_deployment_id was non‑NULL, then drop the old column). Confirm this is covered in migrations.


371-371: Backfill domains.environment_id for existing rows.

The new environment_id column enables environment-scoped queries (promotion/rollback). Ensure a backfill sets environment_id for existing domains by joining via deployments(deployment_id→environment_id); otherwise new queries may miss legacy rows.


418-423: Add indexes to support environment-scoped domain queries.

FindDomainsForRollback/Promotion filter by environment_id (and sticky). Add covering indexes to avoid full scans.

 CREATE INDEX `workspace_idx` ON `domains` (`workspace_id`);
 CREATE INDEX `project_idx` ON `domains` (`project_id`);
 CREATE INDEX `deployment_idx` ON `domains` (`deployment_id`);
+CREATE INDEX `environment_idx` ON `domains` (`environment_id`);
+CREATE INDEX `environment_sticky_idx` ON `domains` (`environment_id`,`sticky`);

Would you like me to open a follow-up PR to adjust the schema source (internal/db/src/schema) so these indexes are generated?

go/pkg/db/queries/domain_reassign.sql (1)

3-7: Reassign should consider environment_id consistency

If a domain is environment-sticky and the target deployment belongs to a different environment, not updating environment_id risks mismatches between environment_id and deployment_id.

  • Confirm that ReassignDomain is only used when the environment does not change, or extend the SET clause to also update environment_id when appropriate.
internal/db/src/schema/domains.ts (1)

31-35: Add index on environmentId to support new queries

FindDomainsForPromotion filters by environment_id; indexing it will prevent full scans at scale.

   (table) => ({
     workspaceIdx: index("workspace_idx").on(table.workspaceId),
     projectIdx: index("project_idx").on(table.projectId),
+    environmentIdx: index("environment_idx").on(table.environmentId),
     deploymentIdx: index("deployment_idx").on(table.deploymentId),
     uniqueDomainIdx: uniqueIndex("unique_domain_idx").on(table.domain),
   }),
go/pkg/db/queries/domain_find_for_promotion.sql (1)

12-16: Verify environment_id filter handles live-sticky domains and multi-tenant safety

  • If “live” sticky domains have NULL environment_id, equality filtering will exclude them. Ensure they either have a non-NULL environment_id or handle NULLs separately (e.g., a dedicated query for live sticky).
  • Consider also filtering by workspace_id or project_id if environment IDs aren’t globally unique.

Minimal hardening (if you have workspace_id at call site):

 WHERE
-  environment_id = sqlc.arg(environment_id)
+  environment_id = sqlc.arg(environment_id)
+  AND workspace_id = sqlc.arg(workspace_id)
   AND sticky IN (sqlc.slice(sticky))
go/pkg/db/project_update_deployments.sql_generated.go (1)

13-20: Guard against lost updates; update within a transaction

The update sets live_deployment_id and is_rolled_back without an optimistic check. In high concurrency, this risks overwriting newer state.

  • Execute this together with domain reassignments in a single transaction.
  • Consider optimistic locking (e.g., WHERE id = ? AND updated_at = ? with the prior value) or a version column to avoid lost updates.

Also applies to: 22-27, 38-43
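One way to rule out the lost updates mentioned above is optimistic locking, sketched here with an in-memory stand-in. The `projectRow` type, its version counter, and `updateIfVersion` are hypothetical; the real change would add a version column (or reuse `updated_at`) to projects and an `AND version = ?` predicate to UpdateProjectDeployments, treating zero affected rows as a conflict.

```go
package main

import (
	"errors"
	"sync"
)

// ErrStale signals a lost-update conflict: the row changed since it was
// read, so the caller should re-read and retry.
var ErrStale = errors.New("project was modified concurrently")

// projectRow is a hypothetical in-memory stand-in for a projects row,
// extended with a version counter for conflict detection.
type projectRow struct {
	mu               sync.Mutex
	version          int64
	liveDeploymentID string
	isRolledBack     bool
}

// updateIfVersion applies the update only if the caller still holds the
// latest version, mirroring UPDATE ... WHERE id = ? AND version = ?.
func (p *projectRow) updateIfVersion(expected int64, live string, rolledBack bool) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	if p.version != expected {
		return ErrStale
	}
	p.liveDeploymentID = live
	p.isRolledBack = rolledBack
	p.version++
	return nil
}
```

A writer that read version 1 succeeds once; a second writer still holding version 1 gets ErrStale instead of silently overwriting the newer state.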

go/pkg/db/queries/domain_find_for_rollback.sql (1)

13-16: Constrain by workspace (and/or project) to harden tenancy.

Filtering only by environment_id + sticky may be sufficient if environment_id is globally unique. If not guaranteed, add workspace_id (and/or project_id) to reduce blast radius.

 WHERE
   environment_id = sqlc.arg(environment_id)
   AND sticky IN (sqlc.slice(sticky))
+  AND workspace_id = sqlc.arg(workspace_id)
 ORDER BY created_at ASC;

This will require updating call sites to pass workspace_id.

go/proto/ctrl/v1/deployment.proto (1)

136-137: Reserve removed RollbackResponse fields to avoid wire incompatibility.

If RollbackResponse previously had a domains field (e.g., = 1), reserve its number/name to prevent reuse by future fields.

-message RollbackResponse {}
+message RollbackResponse {
+  // Reserve old fields to preserve wire compatibility
+  reserved 1;
+  reserved "domains";
+}
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/deployments-list.tsx (2)

120-129: Use liveDeployment?.id in key to avoid object-identity churn.

Including the liveDeployment object in the key risks unnecessary remounts when the object identity changes. Use its id.

-          <DomainList
-            key={`${deployment.id}-${liveDeployment}-${project?.isRolledBack}`}
+          <DomainList
+            key={`${deployment.id}-${liveDeployment?.id}-${project?.isRolledBack}`}
             deploymentId={deployment.id}
             hackyRevalidateDependency={project?.liveDeploymentId}
           />

369-385: Add a skeleton for the new Domains column.

Skeleton rows don’t render anything for key "domains", causing visual jitter.

             {column.key === "source" && <SourceColumnSkeleton />}
+            {column.key === "domains" && (
+              <div className="h-4 w-40 bg-grayA-3 rounded" />
+            )}
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/promotion-dialog.tsx (2)

58-67: Consider removing or improving the refetch workaround.

The code uses @ts-expect-error comments and admits this is a "hack to revalidate". This approach could break if the underlying library implementation changes and makes the code harder to maintain.

Consider these alternatives:

  1. Use the utils.invalidate() call which is already present and should handle invalidation
  2. If specific invalidation is needed, use typed TRPC utils methods
  3. Document why standard invalidation is insufficient if this hack is truly necessary
 onSuccess: () => {
   utils.invalidate();
   toast.success("Promotion completed", {
     description: `Successfully promoted to deployment ${targetDeployment.id}`,
   });
-  // hack to revalidate
-  try {
-    // @ts-expect-error Their docs say it's here
-    collection.projects.utils.refetch();
-    // @ts-expect-error Their docs say it's here
-    collection.deployments.utils.refetch();
-    // @ts-expect-error Their docs say it's here
-    collection.domains.utils.refetch();
-  } catch (error) {
-    console.error("Refetch error:", error);
-  }
   onOpenChange(false);
 },

150-152: Improve ID truncation for better readability.

The current ID display shows only the first 3 and last 4 characters. Consider using the existing shortenId utility function that's already imported for consistency with other ID displays in the codebase.

Apply this diff for consistent ID formatting:

-{`${deployment.id.slice(0, 3)}...${deployment.id.slice(-4)}`}
+{shortenId(deployment.id)}
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/actions/deployment-list-table-action.popover.constants.tsx (1)

29-36: Allow promoting deployments that are live due to rollback

isRolledBack is a project-level flag (internal/db/src/schema/projects.ts and apps/dashboard/lib/collections/deploy/projects.ts); Deployment (apps/dashboard/lib/collections/deploy/deployments.ts) has no such field and DeploymentListTableActions currently doesn't receive project state. Fix: pass project?.isRolledBack (or the project) from deployments-list.tsx into deployment-list-table-action.popover.constants.tsx and update canPromote to allow promotion when the project is rolled back (e.g. keep existing checks and replace the final id check with (selectedDeployment.id !== liveDeployment.id || project?.isRolledBack)).

go/apps/ctrl/services/deployment/promote.go (1)

123-134: Domain reassignment should be transactional to avoid partial updates.

A failure mid-loop leaves some domains moved and others not, while gateway configs may already be updated. Wrap ReassignDomain calls in a single DB transaction (and consider reordering: commit DB changes before partition upserts or implement compensating logic).

I can draft a Tx version if helpful.
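The all-or-nothing shape being suggested can be illustrated with a toy in-memory "transaction". The `table` type, `reassignAllOrNothing`, and the injected `failOn` failure are purely illustrative; the real handler would begin a `sql.Tx`, run the sqlc-generated ReassignDomain and project update queries on it, and commit only after every statement succeeds.

```go
package main

import "fmt"

// table is a toy in-memory stand-in for the domains table, mapping
// domain ID to deployment ID.
type table map[string]string

// reassignAllOrNothing stages every update on a copy and publishes the
// copy only if all updates succeed (like tx.Commit); any failure discards
// the staged copy and returns the original state (like tx.Rollback).
// failOn injects a failure on a given domain ID for demonstration.
func reassignAllOrNothing(current table, updates map[string]string, failOn string) (table, error) {
	staged := table{}
	for k, v := range current {
		staged[k] = v
	}
	for domainID, deploymentID := range updates {
		if domainID == failOn {
			return current, fmt.Errorf("reassign %s failed", domainID)
		}
		staged[domainID] = deploymentID
	}
	return staged, nil
}
```

With this shape, a mid-loop failure can never leave some domains pointing at the new deployment and others at the old one, which is exactly the partial state the review comment warns about.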

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 4d6fcc6 and 74b76c1.

⛔ Files ignored due to path filters (4)
  • go/gen/proto/ctrl/v1/ctrlv1connect/deployment.connect.go is excluded by !**/gen/**
  • go/gen/proto/ctrl/v1/deployment.pb.go is excluded by !**/*.pb.go, !**/gen/**
  • internal/proto/generated/ctrl/v1/deployment_pb.ts is excluded by !**/generated/**
  • internal/proto/generated/metald/v1/metald_pb.ts is excluded by !**/generated/**
📒 Files selected for processing (40)
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/promotion-dialog.tsx (1 hunks)
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx (3 hunks)
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/actions/deployment-list-table-action.popover.constants.tsx (3 hunks)
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/domain_list.tsx (1 hunks)
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/deployments-list.tsx (8 hunks)
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/utils/get-row-class.ts (2 hunks)
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/hooks/use-deployments.ts (1 hunks)
  • apps/dashboard/app/(app)/projects/_components/create-project/create-project-dialog.tsx (1 hunks)
  • apps/dashboard/lib/collections/deploy/projects.ts (1 hunks)
  • apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts (1 hunks)
  • apps/dashboard/lib/trpc/routers/deploy/deployment/rollback.ts (2 hunks)
  • apps/dashboard/lib/trpc/routers/deploy/project/list.ts (3 hunks)
  • apps/dashboard/lib/trpc/routers/index.ts (2 hunks)
  • deploy.bash (1 hunks)
  • go/apps/ctrl/services/deployment/deploy_workflow.go (3 hunks)
  • go/apps/ctrl/services/deployment/promote.go (1 hunks)
  • go/apps/ctrl/services/deployment/rollback.go (5 hunks)
  • go/pkg/db/bulk_domain_insert.sql_generated.go (3 hunks)
  • go/pkg/db/domain_find_by_deployment_id.sql_generated.go (0 hunks)
  • go/pkg/db/domain_find_by_domain.sql_generated.go (1 hunks)
  • go/pkg/db/domain_find_for_promotion.sql_generated.go (1 hunks)
  • go/pkg/db/domain_find_for_rollback.sql_generated.go (1 hunks)
  • go/pkg/db/domain_insert.sql_generated.go (4 hunks)
  • go/pkg/db/domain_reassign.sql_generated.go (2 hunks)
  • go/pkg/db/models_generated.go (2 hunks)
  • go/pkg/db/project_find_by_id.sql_generated.go (3 hunks)
  • go/pkg/db/project_update_deployments.sql_generated.go (1 hunks)
  • go/pkg/db/querier_generated.go (6 hunks)
  • go/pkg/db/queries/domain_find_by_deployment_id.sql (1 hunks)
  • go/pkg/db/queries/domain_find_for_promotion.sql (1 hunks)
  • go/pkg/db/queries/domain_find_for_rollback.sql (1 hunks)
  • go/pkg/db/queries/domain_insert.sql (2 hunks)
  • go/pkg/db/queries/domain_reassign.sql (1 hunks)
  • go/pkg/db/queries/project_find_by_id.sql (1 hunks)
  • go/pkg/db/queries/project_update_deployments.sql (1 hunks)
  • go/pkg/db/schema.sql (2 hunks)
  • go/proto/ctrl/v1/deployment.proto (2 hunks)
  • internal/db/src/schema/domains.ts (2 hunks)
  • internal/db/src/schema/projects.ts (2 hunks)
  • internal/schema/src/auditlog.ts (1 hunks)
💤 Files with no reviewable changes (1)
  • go/pkg/db/domain_find_by_deployment_id.sql_generated.go
🧰 Additional context used
🧠 Learnings (6)
📚 Learning: 2025-07-25T19:09:43.284Z
Learnt from: mcstepp
PR: unkeyed/unkey#3662
File: apps/dashboard/lib/trpc/routers/deployment/list.ts:11-11
Timestamp: 2025-07-25T19:09:43.284Z
Learning: In apps/dashboard/lib/trpc/routers/deployment/list.ts, the listDeployments procedure intentionally queries the versions table rather than a deployments table. The user mcstepp indicated that renaming the table would require a database migration, which was deferred for the current PR focused on UI features.

Applied to files:

  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/domain_list.tsx
  • apps/dashboard/lib/trpc/routers/index.ts
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/actions/deployment-list-table-action.popover.constants.tsx
  • apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/deployments-list.tsx
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/utils/get-row-class.ts
  • apps/dashboard/lib/trpc/routers/deploy/deployment/rollback.ts
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/hooks/use-deployments.ts
  • apps/dashboard/lib/trpc/routers/deploy/project/list.ts
📚 Learning: 2025-07-25T19:11:00.208Z
Learnt from: mcstepp
PR: unkeyed/unkey#3662
File: apps/dashboard/lib/trpc/routers/deployment/getOpenApiDiff.ts:110-147
Timestamp: 2025-07-25T19:11:00.208Z
Learning: In apps/dashboard/lib/trpc/routers/deployment/getOpenApiDiff.ts, the user mcstepp prefers to keep mock data fallbacks in POC/demonstration code for simplicity, even if it wouldn't be production-ready. This aligns with the PR being work-in-progress for demonstration purposes.

Applied to files:

  • apps/dashboard/lib/trpc/routers/index.ts
  • apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts
  • apps/dashboard/lib/trpc/routers/deploy/deployment/rollback.ts
  • apps/dashboard/lib/trpc/routers/deploy/project/list.ts
📚 Learning: 2025-09-12T18:11:33.481Z
Learnt from: mcstepp
PR: unkeyed/unkey#3952
File: apps/dashboard/lib/trpc/routers/deploy/rollback.ts:23-24
Timestamp: 2025-09-12T18:11:33.481Z
Learning: In apps/dashboard/lib/trpc/routers/deploy/rollback.ts, the CTRL_URL environment variable should fail fast with a clear error message if missing in non-development environments, rather than defaulting to localhost which can mask production configuration issues.

Applied to files:

  • apps/dashboard/lib/trpc/routers/index.ts
  • apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts
  • apps/dashboard/lib/trpc/routers/deploy/deployment/rollback.ts
📚 Learning: 2025-09-12T17:57:18.337Z
Learnt from: mcstepp
PR: unkeyed/unkey#3952
File: apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx:70-79
Timestamp: 2025-09-12T17:57:18.337Z
Learning: In the deployment rollback functionality, self-rollback scenarios are prevented at the UI level in the actions menu through the `canRollback` condition which includes `!("active" in deployment && deployment.active)`, making additional checks in the dialog component redundant.

Applied to files:

  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx
  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/actions/deployment-list-table-action.popover.constants.tsx
📚 Learning: 2025-09-12T17:57:18.337Z
Learnt from: mcstepp
PR: unkeyed/unkey#3952
File: apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx:70-79
Timestamp: 2025-09-12T17:57:18.337Z
Learning: In the rollback functionality, the actions menu prevents self-rollback scenarios by including `!deployment.active` in the `canRollback` condition, ensuring only inactive deployments show the rollback option.

Applied to files:

  • apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/actions/deployment-list-table-action.popover.constants.tsx
📚 Learning: 2025-09-11T14:24:40.988Z
Learnt from: Flo4604
PR: unkeyed/unkey#3944
File: go/apps/ctrl/services/deployment/deploy_workflow.go:326-334
Timestamp: 2025-09-11T14:24:40.988Z
Learning: The InsertDomains method in the bulk queries uses ON DUPLICATE KEY UPDATE, making it an upsert operation that is idempotent and safe for retries, despite the "Insert" naming convention.

Applied to files:

  • go/pkg/db/bulk_domain_insert.sql_generated.go
🧬 Code graph analysis (15)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/domain_list.tsx (2)
apps/dashboard/app/(app)/projects/[projectId]/layout-provider.tsx (1)
  • useProjectLayout (13-19)
internal/db/src/schema/domains.ts (1)
  • domains (4-36)
go/apps/ctrl/services/deployment/rollback.go (4)
go/pkg/db/domain_find_for_rollback.sql_generated.go (1)
  • FindDomainsForRollbackParams (32-35)
go/pkg/db/models_generated.go (4)
  • NullDomainsSticky (259-262)
  • DomainsSticky (239-239)
  • DomainsStickyLive (244-244)
  • DomainsStickyEnvironment (243-243)
internal/proto/generated/ctrl/v1/deployment_pb.ts (1)
  • RollbackResponse (447-447)
go/gen/proto/ctrl/v1/deployment.pb.go (3)
  • RollbackResponse (900-904)
  • RollbackResponse (917-917)
  • RollbackResponse (932-934)
go/pkg/db/domain_insert.sql_generated.go (1)
go/pkg/db/models_generated.go (3)
  • Domain (598-609)
  • DomainsType (282-282)
  • NullDomainsSticky (259-262)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx (1)
apps/dashboard/lib/collections/index.ts (1)
  • collection (72-76)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/actions/deployment-list-table-action.popover.constants.tsx (2)
apps/dashboard/components/logs/table-action.popover.tsx (1)
  • TableActionPopover (29-148)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/promotion-dialog.tsx (1)
  • PromotionDialog (35-135)
apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts (5)
apps/dashboard/lib/trpc/trpc.ts (4)
  • t (8-8)
  • requireUser (10-21)
  • requireWorkspace (23-36)
  • withRatelimit (122-138)
apps/dashboard/lib/env.ts (1)
  • env (3-51)
internal/proto/generated/ctrl/v1/deployment_pb.ts (1)
  • DeploymentService (569-610)
apps/dashboard/lib/db.ts (1)
  • db (5-26)
apps/dashboard/lib/audit.ts (1)
  • insertAuditLogs (103-158)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/deployments-list.tsx (3)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/env-status-badge.tsx (1)
  • EnvStatusBadge (29-42)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/deployment-status-badge.tsx (1)
  • DeploymentStatusBadge (77-123)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/domain_list.tsx (1)
  • DomainList (13-31)
go/pkg/db/domain_find_for_promotion.sql_generated.go (1)
go/pkg/db/models_generated.go (2)
  • NullDomainsSticky (259-262)
  • Domain (598-609)
go/pkg/db/domain_find_for_rollback.sql_generated.go (1)
go/pkg/db/models_generated.go (1)
  • NullDomainsSticky (259-262)
go/apps/ctrl/services/deployment/promote.go (9)
internal/proto/generated/ctrl/v1/deployment_pb.ts (2)
  • PromoteRequest (460-465)
  • PromoteResponse (478-478)
go/pkg/db/handle_err_no_rows.go (1)
  • IsNotFound (8-10)
go/pkg/assert/all.go (1)
  • All (19-26)
go/pkg/db/models_generated.go (6)
  • DeploymentsStatusReady (200-200)
  • NullDomainsSticky (259-262)
  • DomainsSticky (239-239)
  • DomainsStickyLive (244-244)
  • DomainsStickyEnvironment (243-243)
  • Domain (598-609)
go/pkg/partition/db/models_generated.go (1)
  • VmsStatusRunning (63-63)
go/pkg/db/domain_find_for_promotion.sql_generated.go (1)
  • FindDomainsForPromotionParams (32-35)
go/pkg/partition/db/gateway_upsert.sql_generated.go (1)
  • UpsertGatewayParams (31-36)
go/pkg/db/domain_reassign.sql_generated.go (1)
  • ReassignDomainParams (22-27)
go/pkg/db/project_update_deployments.sql_generated.go (1)
  • UpdateProjectDeploymentsParams (22-27)
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/promotion-dialog.tsx (6)
internal/proto/generated/ctrl/v1/deployment_pb.ts (1)
  • Deployment (172-301)
apps/dashboard/lib/collections/index.ts (2)
  • collectionManager (69-69)
  • collection (72-76)
internal/db/src/schema/domains.ts (1)
  • domains (4-36)
apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts (1)
  • promote (14-116)
apps/dashboard/lib/shorten-id.ts (1)
  • shortenId (5-54)
apps/dashboard/app/(app)/projects/[projectId]/details/active-deployment-card/status-indicator.tsx (1)
  • StatusIndicator (4-38)
go/pkg/db/domain_find_by_domain.sql_generated.go (1)
go/pkg/db/models_generated.go (1)
  • Domain (598-609)
go/apps/ctrl/services/deployment/deploy_workflow.go (3)
go/pkg/db/models_generated.go (1)
  • Domain (598-609)
go/pkg/db/domain_reassign.sql_generated.go (1)
  • ReassignDomainParams (22-27)
go/pkg/db/project_update_deployments.sql_generated.go (1)
  • UpdateProjectDeploymentsParams (22-27)
go/pkg/db/querier_generated.go (2)
go/pkg/db/domain_find_for_promotion.sql_generated.go (2)
  • FindDomainsForPromotionParams (32-35)
  • FindDomainsForPromotionRow (37-47)
go/pkg/db/domain_find_for_rollback.sql_generated.go (2)
  • FindDomainsForRollbackParams (32-35)
  • FindDomainsForRollbackRow (37-47)
apps/dashboard/lib/trpc/routers/deploy/project/list.ts (2)
apps/dashboard/lib/collections/deploy/projects.ts (1)
  • projects (42-108)
internal/db/src/schema/projects.ts (1)
  • projects (8-31)
🪛 Shellcheck (0.11.0)
deploy.bash

[warning] 5-5: i appears unused. Verify use (or export if used externally).

(SC2034)

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: Test API / API Test Local
  • GitHub Check: Test Go API Local / Test
  • GitHub Check: Test Packages / Test
  • GitHub Check: Build / Build
  • GitHub Check: Vercel Agent Review
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (34)
apps/dashboard/app/(app)/projects/[projectId]/deployments/hooks/use-deployments.ts (1)

108-110: No functional change. LGTM.

apps/dashboard/lib/collections/deploy/projects.ts (1)

14-15: Project schema field rename to isRolledBack: LGTM.

Matches DB and UI usage; safer than nullable id.

apps/dashboard/app/(app)/projects/_components/create-project/create-project-dialog.tsx (1)

43-44: Schema switch to isRolledBack looks correct; confirm no stale rolledBackDeploymentId references remain.
rg returned no matches for rolledBackDeploymentId / rolled_back_deployment_id — check server code, DTOs/API types, and DB migrations for any lingering references.

internal/schema/src/auditlog.ts (1)

59-60: Add of "deployment.promote" event — LGTM; consumers accept it.
Search shows consumers use the z.enum (unkeyAuditLogEvents) or derive validEventTypes from it (so filters/parsers update automatically); the only literal "deployment.promote" is in the promote handler that emits the event. No hardcoded UI filter lists found.

apps/dashboard/lib/trpc/routers/deploy/deployment/rollback.ts (1)

111-111: Return shape changed to {}; verified no clients rely on domains payload.

Only consumer found: apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx — it does not read the mutation result (uses utils.invalidate + manual collection.*.utils.refetch()). apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/domain_list.tsx queries domains via useLiveQuery and does not consume rollback payload (notes a separate revalidation issue). No other references to deploy.deployment.rollback were found.

go/apps/ctrl/services/deployment/rollback.go (3)

104-111: Good switch to environment-scoped, sticky-filtered domain selection.

Using FindDomainsForRollback with environment_id and sticky in {live, environment} matches the intended rollback semantics.


138-142: LGTM: ReassignDomain payload aligns with new schema.

Passing DeploymentID (nullable) and dropping the old IsRolledBack field is consistent with the updated ReassignDomainParams.


175-180: is_rolled_back lifecycle: confirmed — rollback sets true, promote clears it.
Rollback sets IsRolledBack = true (go/apps/ctrl/services/deployment/rollback.go:175-180). Promote sets IsRolledBack = false (go/apps/ctrl/services/deployment/promote.go:136-141). deploy_workflow checks project.IsRolledBack and skips domain/pointer updates (go/apps/ctrl/services/deployment/deploy_workflow.go:311,407-410).

apps/dashboard/lib/trpc/routers/index.ts (2)

44-45: Wire-up looks good.

Importing and exposing promote alongside rollback is consistent with existing deployment routes.


333-335: LGTM: Added deploy.deployment.promote route.

Matches the new promote feature and UI integration.

go/pkg/db/models_generated.go (2)

599-608: LGTM: Domain model reflects environment_id addition and sticky semantics.

Matches schema and the new environment-scoped domain queries.


717-727: LGTM: Project model now surfaces is_rolled_back.

Aligned with the schema change and rollback flow updates.

go/pkg/db/domain_find_by_domain.sql_generated.go (1)

12-14: LGTM; EnvironmentID added and scan order matches SELECT

The SELECT/Scan shape aligns with the updated Domain struct.

Also applies to: 22-27

go/pkg/db/queries/project_find_by_id.sql (1)

11-11: LGTM — downstreams updated to boolean is_rolled_back
No remaining uses of rolled_back_deployment_id; consumers now reference is_rolled_back (schema, generated Go files, TS schema, dashboard router).

go/pkg/db/queries/project_update_deployments.sql (1)

5-5: LGTM: switch to is_rolled_back flag.

Update is correct and aligns with generated querier.

apps/dashboard/lib/trpc/routers/deploy/project/list.ts (1)

14-15: LGTM: expose isRolledBack, map to boolean.

Selection and mapping are consistent with schema (is_rolled_back → isRolledBack).

Ensure the public Project type includes isRolledBack: boolean and all consumers are updated (get-row-class, table badges).

Also applies to: 36-37, 62-63

go/pkg/db/domain_insert.sql_generated.go (1)

18-19: LGTM: environment_id added to Domain insert.

Struct, SQL, and arg order are consistent.

Also applies to: 39-49, 76-87

apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/deployments-list.tsx (1)

90-96: LGTM: rolled back/live badge logic.

Conditional display based on isLive and project.isRolledBack is clear.

go/pkg/db/querier_generated.go (3)

193-211: LGTM: Add FindDomainsForPromotion.

Interface additions and comments align with new SQL and environment-aware domain flows.


212-228: LGTM: Add FindDomainsForRollback.

Matches the new query and supports the rollback workflow.


635-640: LGTM: Project includes is_rolled_back.

Generated interface reflects schema change.

go/proto/ctrl/v1/deployment.proto (1)

154-156: LGTM — Promote RPC added; codegen & server present.

Generated client/server include Promote (go/gen/proto/ctrl/v1/ctrlv1connect/deployment.connect.go) and the server handler exists (go/apps/ctrl/services/deployment/promote.go). RollbackResponse is an empty proto (go/proto/ctrl/v1/deployment.proto) and no references to RollbackResponse.domains were found.

apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts (3)

24-31: Consistent environment configuration error handling.

The CTRL_URL check follows the established pattern from the rollback implementation, correctly failing fast with PRECONDITION_FAILED when not configured. This maintains consistency across deployment operations.


42-67: LGTM! Robust deployment validation.

The deployment verification logic correctly validates workspace ownership and deployment readiness status before allowing promotion. The error messages are clear and informative.


88-103: Audit logging implementation looks good.

The audit log entry correctly captures the promotion event with appropriate context, actor information, and resource details.

apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/utils/get-row-class.ts (2)

38-47: LGTM! Improved visual distinction for rolled-back deployments.

The updated warning color scheme (warning-2 through warning-6) provides better visual hierarchy and clearer state indication for rolled-back deployments compared to the previous styling.


49-61: Well-structured state determination logic.

The refactored function signature and conditional logic correctly prioritizes deployment states (failed > rolled-back > normal) and properly uses the new boolean isRolledBack flag in combination with liveDeploymentId for accurate state determination.

go/pkg/db/project_find_by_id.sql_generated.go (1)

31-42: LGTM! Clean migration to boolean flag.

The change from RolledBackDeploymentID (nullable string) to IsRolledBack (boolean) simplifies the data model and makes the rollback state more explicit and easier to work with.

go/apps/ctrl/services/deployment/promote.go (1)

47-53: Preconditions may block first-time promotion.

Requiring project.LiveDeploymentID.Valid prevents promotion when the project has no current live deployment. Is that intentional? If not, drop this assertion or gate it to allow a first promotion.

go/apps/ctrl/services/deployment/deploy_workflow.go (1)

311-318: Skip-on-rollback logic acknowledged.

Using project.IsRolledBack to guard domain changes aligns with the new semantics.

go/pkg/db/domain_find_for_rollback.sql_generated.go (2)

66-77: Same empty-sticky behavior as promotion; verify callers.

IN (NULL) yields no matches when Sticky is empty. Ensure all callers pass non-empty sticky slices.

Use the same script pattern for FindDomainsForRollback call sites:

#!/bin/bash
set -euo pipefail
rg -nP '\bFindDomainsForRollback\(' -C3
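Why the empty-slice case matters: sqlc expands a /*SLICE:...*/ placeholder into one ? per element and falls back to the literal NULL when the slice is empty, so the predicate becomes IN (NULL) and matches nothing. A small sketch of that expansion (the expandIn helper and query text are illustrative, not the generated code verbatim):

```go
package main

import (
	"fmt"
	"strings"
)

// expandIn mimics sqlc's slice expansion: n placeholders for a slice of
// length n, or the literal NULL when the slice is empty.
func expandIn(query string, n int) string {
	if n == 0 {
		return strings.Replace(query, "/*SLICE:sticky*/?", "NULL", 1)
	}
	placeholders := strings.Repeat(",?", n)[1:] // "?,?,...,?"
	return strings.Replace(query, "/*SLICE:sticky*/?", placeholders, 1)
}

func main() {
	q := "SELECT id FROM domains WHERE sticky IN (/*SLICE:sticky*/?)"
	fmt.Println(expandIn(q, 0)) // IN (NULL): matches no rows
	fmt.Println(expandIn(q, 2)) // IN (?,?)
}
```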

32-35: Confirm NullDomainsSticky implements Valuer/Scanner (duplicate).

Same concern as promotion query.

go/pkg/db/domain_find_for_promotion.sql_generated.go (1)

66-78: No action required — callers pass non-empty Sticky slices

FindDomainsForPromotion falls back to IN (NULL) when Sticky is empty (matches no rows); verified call sites apps/ctrl/services/deployment/promote.go and apps/ctrl/services/deployment/rollback.go pass non-empty Sticky slices (live, environment) and no other callers were found.

go/pkg/db/domain_reassign.sql_generated.go (1)

22-27: API change looks correct — callers updated

ReassignDomain now uses DeploymentID; verified callers updated in go/apps/ctrl/services/deployment/{rollback.go, deploy_workflow.go, promote.go}. No TargetDeploymentID usages remain and the SQL argument order matches Exec.

Comment on lines +78 to +86
const handlePromotion = async () => {
  await promote
    .mutateAsync({
      targetDeploymentId: targetDeployment.id,
    })
    .catch((error) => {
      console.error("Promotion error:", error);
    });
};

🛠️ Refactor suggestion

Remove redundant error handling in handlePromotion.

The error is already handled by the mutation's onError callback, making the additional .catch() redundant. The current implementation could also prevent errors from being properly surfaced to the UI.

Apply this diff to simplify the error handling:

 const handlePromotion = async () => {
-  await promote
-    .mutateAsync({
-      targetDeploymentId: targetDeployment.id,
-    })
-    .catch((error) => {
-      console.error("Promotion error:", error);
-    });
+  await promote.mutateAsync({
+    targetDeploymentId: targetDeployment.id,
+  });
 };
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
const handlePromotion = async () => {
  await promote
    .mutateAsync({
      targetDeploymentId: targetDeployment.id,
    })
    .catch((error) => {
      console.error("Promotion error:", error);
    });
};
const handlePromotion = async () => {
  await promote.mutateAsync({
    targetDeploymentId: targetDeployment.id,
  });
};
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/promotion-dialog.tsx
around lines 78 to 86, the handlePromotion function attaches a redundant
.catch() to promote.mutateAsync even though the mutation already handles errors
via its onError callback; remove the .catch(...) chain and simply await
promote.mutateAsync({ targetDeploymentId: targetDeployment.id }) so errors are
propagated to the mutation's handlers and not swallowed or double-handled by
this function.

import { shortenId } from "@/lib/shorten-id";
import { trpc } from "@/lib/trpc/client";
-import { eq, inArray, useLiveQuery } from "@tanstack/react-db";
+import { inArray, useLiveQuery } from "@tanstack/react-db";

🛠️ Refactor suggestion

Scope rollback domains to environment_id (and re-add eq).

Without filtering by environment_id, the dialog may show domains from other environments. Align this with FindDomainsForRollback (env + sticky filter) to avoid cross-environment leakage.

Apply:

-import { inArray, useLiveQuery } from "@tanstack/react-db";
+import { and, eq, inArray, useLiveQuery } from "@tanstack/react-db";
@@
-  const domains = useLiveQuery((q) =>
-    q
-      .from({ domain: domainCollection })
-      .where(({ domain }) => inArray(domain.sticky, ["environment", "live"])),
-  );
+  const domains = useLiveQuery((q) =>
+    q
+      .from({ domain: domainCollection })
+      .where(({ domain }) =>
+        and(
+          eq(domain.environmentId, liveDeployment.environmentId),
+          inArray(domain.sticky, ["environment", "live"]),
+        ),
+      )
+      .orderBy(({ domain }) => domain.domain, "asc"),
+  );

Also applies to: 45-49

🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/rollback-dialog.tsx
around lines 6 and 45-49, the rollback domain selection currently omits an
environment_id filter and removed the eq import; update the top import to
include eq (e.g., import { inArray, useLiveQuery, eq } from
"@tanstack/react-db") and modify the domain query to add an
eq(domain.environment_id, currentEnvironmentId) predicate (alongside the
existing sticky/inArray filters) so results are scoped to the current
environment, matching FindDomainsForRollback’s env + sticky behavior.

Comment on lines +21 to +22
[hackyRevalidateDependency],
);

⚠️ Potential issue

Include deploymentId in the useLiveQuery dependency array.

If the component is reused with a different deploymentId but the revalidate key stays unchanged, the query won’t rerun.

-    [hackyRevalidateDependency],
+    [deploymentId, hackyRevalidateDependency],
📝 Committable suggestion


Suggested change
[hackyRevalidateDependency],
);
[deploymentId, hackyRevalidateDependency],
);
🤖 Prompt for AI Agents
In
apps/dashboard/app/(app)/projects/[projectId]/deployments/components/table/components/domain_list.tsx
around lines 21 to 22, the useLiveQuery dependency array only contains
hackyRevalidateDependency and must also include deploymentId so the query reruns
when the component is reused with a different deployment; update the dependency
array to include deploymentId (and keep existing dependencies) so the hook will
re-evaluate whenever deploymentId changes.

Comment on lines +76 to +86
await ctrl
  .promote({
    targetDeploymentId: targetDeployment.id,
  })
  .catch((err) => {
    console.error(err);
    throw new TRPCError({
      code: "INTERNAL_SERVER_ERROR",
      message: err,
    });
  });

⚠️ Potential issue

Consider improving error message propagation.

When the ctrl.promote call fails, the error is being passed directly as the message to TRPCError. This could expose internal implementation details or result in unclear error messages for users.

Apply this diff to improve error handling:

 await ctrl
   .promote({
     targetDeploymentId: targetDeployment.id,
   })
   .catch((err) => {
     console.error(err);
     throw new TRPCError({
       code: "INTERNAL_SERVER_ERROR",
-      message: err,
+      message: "Failed to promote deployment",
     });
   });
📝 Committable suggestion


Suggested change
await ctrl
  .promote({
    targetDeploymentId: targetDeployment.id,
  })
  .catch((err) => {
    console.error(err);
    throw new TRPCError({
      code: "INTERNAL_SERVER_ERROR",
      message: err,
    });
  });
await ctrl
  .promote({
    targetDeploymentId: targetDeployment.id,
  })
  .catch((err) => {
    console.error(err);
    throw new TRPCError({
      code: "INTERNAL_SERVER_ERROR",
      message: "Failed to promote deployment",
    });
  });
🤖 Prompt for AI Agents
In apps/dashboard/lib/trpc/routers/deploy/deployment/promote.ts around lines 76
to 86, the catch block currently passes the raw err directly as the TRPCError
message which may leak internals or be non-string; change it to throw a
TRPCError with a concise, user-friendly message (e.g. "Failed to promote
deployment") and attach the original error as the cause or in metadata for
debugging; also keep a full internal console.error/log of err for operators but
ensure the TRPCError.message is a safe string and not the raw error object.

Comment on lines 318 to 323
 updateErr := db.Query.ReassignDomain(txCtx, tx, db.ReassignDomainParams{
-    ID:                 existing.ID,
-    TargetWorkspaceID:  workspace.ID,
-    TargetDeploymentID: sql.NullString{Valid: true, String: req.DeploymentID},
-    IsRolledBack:       false,
+    ID:                existing.ID,
+    TargetWorkspaceID: workspace.ID,
+    DeploymentID:      sql.NullString{Valid: true, String: req.DeploymentID},
 })


⚠️ Potential issue

ReassignDomain sets updated_at to NULL; pass UpdatedAt.

ReassignDomain SQL always sets updated_at = ?. Omitting UpdatedAt here writes NULL.

Apply:

 updateErr := db.Query.ReassignDomain(txCtx, tx, db.ReassignDomainParams{
     ID:                existing.ID,
     TargetWorkspaceID: workspace.ID,
     DeploymentID:      sql.NullString{Valid: true, String: req.DeploymentID},
+    UpdatedAt:         sql.NullInt64{Valid: true, Int64: time.Now().UnixMilli()},
 })

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In go/apps/ctrl/services/deployment/deploy_workflow.go around lines 318 to 323,
the call to db.Query.ReassignDomain omits the UpdatedAt field causing updated_at
to be written as NULL; include the existing domain's UpdatedAt value in the
ReassignDomainParams (e.g., set UpdatedAt: existing.UpdatedAt or wrap it
appropriately as sql.NullTime) so the SQL parameter is populated instead of
NULL.

Comment on lines +407 to 414
 if !project.IsRolledBack {
     // only update this if the deployment is not rolled back
     err = hydra.StepVoid(ctx, "update-project-deployment-pointers", func(stepCtx context.Context) error {
         return db.Query.UpdateProjectDeployments(stepCtx, w.db.RW(), db.UpdateProjectDeploymentsParams{
-            ID:                     req.ProjectID,
-            LiveDeploymentID:       sql.NullString{Valid: true, String: req.DeploymentID},
-            RolledBackDeploymentID: sql.NullString{Valid: false, String: ""},
-            UpdatedAt:              sql.NullInt64{Valid: true, Int64: time.Now().UnixMilli()},
+            ID:               req.ProjectID,
+            LiveDeploymentID: sql.NullString{Valid: true, String: req.DeploymentID},
+            UpdatedAt:        sql.NullInt64{Valid: true, Int64: time.Now().UnixMilli()},
         })

⚠️ Potential issue

Auto-updating LiveDeploymentID conflicts with “promote to go live”.

This makes new deployments live automatically when project.IsRolledBack is false, contradicting the intended “explicit promotion” behavior reported in review comments.

Consider either:

  • Only set LiveDeploymentID automatically if no live deployment exists yet, or
  • Remove this block and require explicit promotion.

Example:

-    if !project.IsRolledBack {
+    if !project.LiveDeploymentID.Valid {
         err = hydra.StepVoid(ctx, "update-project-deployment-pointers", func(stepCtx context.Context) error {
             return db.Query.UpdateProjectDeployments(stepCtx, w.db.RW(), db.UpdateProjectDeploymentsParams{
                 ID:               req.ProjectID,
                 LiveDeploymentID: sql.NullString{Valid: true, String: req.DeploymentID},
                 UpdatedAt:        sql.NullInt64{Valid: true, Int64: time.Now().UnixMilli()},
             })
         })
     }
🤖 Prompt for AI Agents
In go/apps/ctrl/services/deployment/deploy_workflow.go around lines 407-414, the
code auto-sets LiveDeploymentID for non-rolled-back deployments which breaks the
intended “explicit promotion” flow; change the behavior to only update
LiveDeploymentID when there is currently no live deployment (i.e., check the
project's existing LiveDeploymentID is null/empty before calling
UpdateProjectDeployments), or remove this update block entirely to require
explicit promotion—if you choose the conditional approach, fetch the current
project (or lock the row) inside the same transaction/context and only call
UpdateProjectDeployments when LiveDeploymentID is absent to avoid race
conditions.

Comment on lines +107 to +115
gatewayChanges := make([]pdb.UpsertGatewayParams, len(domains))
for i, domain := range domains {
	gatewayChanges[i] = pdb.UpsertGatewayParams{
		WorkspaceID:  domain.WorkspaceID,
		DeploymentID: targetDeployment.ID,
		Hostname:     domain.Domain,
		Config:       gatewayConfig.Config,
	}
}

⚠️ Potential issue

Partition DB workspace_id mismatch (use target workspace).

Gateway upserts use domain.WorkspaceID, but you reassign the domain to targetDeployment.WorkspaceID. This can desync partition DB from control DB and may break uniqueness/ownership semantics.

Apply:

-        gatewayChanges[i] = pdb.UpsertGatewayParams{
-            WorkspaceID:  domain.WorkspaceID,
+        gatewayChanges[i] = pdb.UpsertGatewayParams{
+            WorkspaceID:  targetDeployment.WorkspaceID,
             DeploymentID: targetDeployment.ID,
             Hostname:     domain.Domain,
             Config:       gatewayConfig.Config,
         }
🤖 Prompt for AI Agents
In go/apps/ctrl/services/deployment/promote.go around lines 107 to 115, the
UpsertGatewayParams currently uses domain.WorkspaceID; change the WorkspaceID
field to targetDeployment.WorkspaceID so the partition DB writes use the target
deployment's workspace. Update the loop to set WorkspaceID:
targetDeployment.WorkspaceID for each gatewayChanges entry (leave other fields
as-is).
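The suggested fix keys every partition-DB upsert to the target deployment's workspace rather than the domain's current one. A minimal sketch (with stand-in types assumed for `pdb.UpsertGatewayParams` and friends):

```go
package main

import "fmt"

// Minimal stand-ins for the db / pdb types referenced in the review
// (assumed shapes, not the generated code).
type Domain struct {
	WorkspaceID string
	Domain      string
}

type Deployment struct {
	ID          string
	WorkspaceID string
}

type UpsertGatewayParams struct {
	WorkspaceID  string
	DeploymentID string
	Hostname     string
}

// buildGatewayChanges keys every partition-DB upsert to the target
// deployment's workspace, keeping it consistent with the domain reassignment.
func buildGatewayChanges(domains []Domain, target Deployment) []UpsertGatewayParams {
	changes := make([]UpsertGatewayParams, len(domains))
	for i, d := range domains {
		changes[i] = UpsertGatewayParams{
			WorkspaceID:  target.WorkspaceID, // not d.WorkspaceID
			DeploymentID: target.ID,
			Hostname:     d.Domain,
		}
	}
	return changes
}

func main() {
	target := Deployment{ID: "dep_2", WorkspaceID: "ws_target"}
	domains := []Domain{{WorkspaceID: "ws_old", Domain: "api.example.com"}}
	fmt.Println(buildGatewayChanges(domains, target)[0].WorkspaceID)
}
```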


// bulkInsertDomain is the base query for bulk insert
-const bulkInsertDomain = `INSERT INTO domains ( id, workspace_id, project_id, deployment_id, domain, type, sticky, created_at, updated_at ) VALUES %s`
+const bulkInsertDomain = `INSERT INTO domains ( id, workspace_id, project_id, environment_id, deployment_id, domain, type, sticky, created_at, updated_at ) VALUES %s`

⚠️ Potential issue

🧩 Analysis chain

Potential regression: bulk insert lost upsert/idempotency

Previously, InsertDomains was idempotent via ON DUPLICATE KEY UPDATE (per prior learnings). This version appears to be plain INSERT, which can cause duplicate-key errors and break retry safety in workflows.

Apply upsert semantics to restore idempotency. Example:

-const bulkInsertDomain = `INSERT INTO domains ( id, workspace_id, project_id, environment_id, deployment_id, domain, type, sticky, created_at, updated_at ) VALUES %s`
+const bulkInsertDomain = `INSERT INTO domains ( id, workspace_id, project_id, environment_id, deployment_id, domain, type, sticky, created_at, updated_at ) VALUES %s
+ON DUPLICATE KEY UPDATE
+  workspace_id = VALUES(workspace_id),
+  project_id = VALUES(project_id),
+  environment_id = VALUES(environment_id),
+  deployment_id = VALUES(deployment_id),
+  type = VALUES(type),
+  sticky = VALUES(sticky)`

Also applies to: 24-25, 35-36


🏁 Script executed:

#!/bin/bash
# Verify whether ON DUPLICATE KEY UPDATE exists anywhere for bulk domain insert
rg -n 'InsertDomains\(' -C3
rg -n 'ON DUPLICATE KEY UPDATE' go/pkg/db/bulk_domain_insert.sql_generated.go

Length of output: 1605


Restore idempotent bulk upsert for domains (add ON DUPLICATE KEY UPDATE)

go/pkg/db/bulk_domain_insert.sql_generated.go currently defines a plain INSERT (const bulkInsertDomain) — this removes the prior upsert/idempotency and will cause duplicate-key errors on retries. Restore ON DUPLICATE KEY UPDATE.

-const bulkInsertDomain = `INSERT INTO domains ( id, workspace_id, project_id, environment_id, deployment_id, domain, type, sticky, created_at, updated_at ) VALUES %s`
+const bulkInsertDomain = `INSERT INTO domains ( id, workspace_id, project_id, environment_id, deployment_id, domain, type, sticky, created_at, updated_at ) VALUES %s
+ON DUPLICATE KEY UPDATE
+  workspace_id = VALUES(workspace_id),
+  project_id = VALUES(project_id),
+  environment_id = VALUES(environment_id),
+  deployment_id = VALUES(deployment_id),
+  type = VALUES(type),
+  sticky = VALUES(sticky)`
🤖 Prompt for AI Agents
In go/pkg/db/bulk_domain_insert.sql_generated.go around line 12, the SQL
constant bulkInsertDomain is a plain INSERT which breaks idempotent upserts;
modify the constant to append an ON DUPLICATE KEY UPDATE clause that updates the
mutable columns on conflict (for example: workspace_id=VALUES(workspace_id),
project_id=VALUES(project_id), environment_id=VALUES(environment_id),
deployment_id=VALUES(deployment_id), domain=VALUES(domain), type=VALUES(type),
sticky=VALUES(sticky), updated_at=VALUES(updated_at)) and do not overwrite
created_at.

Comment on lines +32 to +35
type FindDomainsForPromotionParams struct {
	EnvironmentID sql.NullString      `db:"environment_id"`
	Sticky        []NullDomainsSticky `db:"sticky"`
}

⚠️ Potential issue

🧩 Analysis chain

Confirm driver support for NullDomainsSticky params.

Passing []NullDomainsSticky into placeholders requires Value/Scan implementations. Verify sqlc generated those for NullDomainsSticky to avoid “unsupported type” at runtime.


🏁 Script executed:

#!/bin/bash
set -euo pipefail
# Look for Valuer/Scanner on NullDomainsSticky
rg -nP 'type\s+NullDomainsSticky\b' -C3
rg -nP 'func\s+\(\s*\*?NullDomainsSticky\)\s+Scan\(' -C3
rg -nP 'func\s+\(\s*\*?NullDomainsSticky\)\s+Value\(' -C3

Length of output: 381


Implement database/sql Scanner and driver.Valuer for NullDomainsSticky

No Value/Scan methods found — NullDomainsSticky is defined at go/pkg/db/models_generated.go:259-262 and used as []NullDomainsSticky in go/pkg/db/domain_find_for_promotion.sql_generated.go:32-35. Add driver.Valuer and sql.Scanner implementations (or change the param to a driver-supported type like string/[]byte or pq.Array) to avoid "unsupported type" runtime errors.

🤖 Prompt for AI Agents
In go/pkg/db/domain_find_for_promotion.sql_generated.go around lines 32-35, the
parameter type []NullDomainsSticky is not supported by database/sql at runtime;
implement database/sql.Scanner and driver.Valuer on the NullDomainsSticky type
defined at go/pkg/db/models_generated.go:259-262 (or replace the param with a
DB-supported type). Add a Value() (driver.Value, error) that returns nil for
null or a []byte/string (e.g., JSON-encoded) suitable for the DB, and a Scan(src
interface{}) error that accepts []byte/string/null and unmarshals/parses back
into the NullDomainsSticky fields, handling NULLs correctly; alternatively
change the param in domain_find_for_promotion.sql_generated.go to a supported
representation (e.g., []string, []byte, or pq.StringArray) and update call sites
accordingly.
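A sketch of the two methods the prompt asks for, written against an assumed shape of the generated `NullDomainsSticky` wrapper (the real type lives in go/pkg/db/models_generated.go and may differ):

```go
package main

import (
	"database/sql/driver"
	"fmt"
)

// DomainsSticky mirrors the generated enum-like string type (assumed shape).
type DomainsSticky string

// NullDomainsSticky is a sketch of the generated nullable wrapper, with the
// driver.Valuer / sql.Scanner methods the review asks for.
type NullDomainsSticky struct {
	DomainsSticky DomainsSticky
	Valid         bool
}

// Value implements driver.Valuer so the type can be passed as a query
// parameter: nil for NULL, otherwise the plain string.
func (ns NullDomainsSticky) Value() (driver.Value, error) {
	if !ns.Valid {
		return nil, nil
	}
	return string(ns.DomainsSticky), nil
}

// Scan implements sql.Scanner so result columns can populate the type,
// accepting NULL, string, and []byte sources.
func (ns *NullDomainsSticky) Scan(src interface{}) error {
	if src == nil {
		ns.DomainsSticky, ns.Valid = "", false
		return nil
	}
	switch v := src.(type) {
	case string:
		ns.DomainsSticky, ns.Valid = DomainsSticky(v), true
	case []byte:
		ns.DomainsSticky, ns.Valid = DomainsSticky(v), true
	default:
		return fmt.Errorf("unsupported Scan type %T for NullDomainsSticky", src)
	}
	return nil
}

func main() {
	var ns NullDomainsSticky
	_ = ns.Scan("branch")
	v, _ := ns.Value()
	fmt.Println(ns.Valid, v)
}
```

Note that this only makes a single `NullDomainsSticky` driver-compatible; passing a `[]NullDomainsSticky` slice into placeholders still depends on how sqlc expands the `IN (...)` list, so the alternative of a flat `[]string` parameter may be simpler.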

Comment on lines +6 to 7
environment_id,
deployment_id,

⚠️ Potential issue

🧩 Analysis chain

LGTM; call sites must populate environment_id

Insertion order and new column look correct. Please ensure all insert paths set environment_id appropriately (environment-sticky vs live/branch cases), else rows may end up with NULL unexpectedly.

Also applies to: 17-18


🏁 Script executed:

#!/bin/bash
# Inspect usage to ensure environment_id is being passed
rg -nP 'InsertDomain\('
rg -nP 'InsertDomainParams\s*\{[^}]*environment_id|EnvironmentID' --type=go

Length of output: 3567


🏁 Script executed:

#!/bin/bash
set -euo pipefail

echo "----- InsertDomain callsites (rg -n -C3 'InsertDomain\\(') -----"
rg -nP -C3 'InsertDomain\(' || true

echo
echo "----- Context: go/apps/ctrl/services/deployment/deploy_workflow.go (lines ~280-310) -----"
nl -ba -w3 -s': ' go/apps/ctrl/services/deployment/deploy_workflow.go | sed -n '270,310p'

echo
echo "----- Context: go/apps/ctrl/run.go (lines ~310-330) -----"
nl -ba -w3 -s': ' go/apps/ctrl/run.go | sed -n '310,330p'

echo
echo "----- Any InsertDomainParams initializers (rg -n -C3 'InsertDomainParams\\s*\\{') -----"
rg -nP -C3 'InsertDomainParams\s*\{' --type=go || true

echo
echo "----- create_deployment.go contexts where EnvironmentID is set (rg matches) -----"
nl -ba -w3 -s': ' go/apps/ctrl/services/deployment/create_deployment.go | sed -n '100,170p'

Length of output: 2220


Populate environment_id on all InsertDomain calls

Insertion order/new column are correct; deploy_workflow.go already sets EnvironmentID, but go/apps/ctrl/run.go (InsertDomain at ~line 322) does not — add EnvironmentID (sql.NullString) there and audit other InsertDomain initializers to avoid NULL environment_id rows.

🤖 Prompt for AI Agents
In go/pkg/db/queries/domain_insert.sql around lines 6-7 and in
go/apps/ctrl/run.go at about line 322, InsertDomain calls currently omit the new
environment_id column causing NULLs; update the InsertDomain initializer in
go/apps/ctrl/run.go to pass EnvironmentID as a sql.NullString (populated the
same way deploy_workflow.go does), ensure the struct literal or argument list
includes EnvironmentID, import/database call types are correct, and then audit
other places that call InsertDomain to add EnvironmentID so no INSERTs leave
environment_id NULL.
