⚠️ Caution: Review failed. The pull request is closed.

ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (1)
📝 Walkthrough

Adds control-plane App/Project services and durable Restate workers for cascade deletes, extends the DB layer with many sqlc-generated queries (delete/list/count/insert/bulk), centralizes bearer auth, updates API wiring, and modifies the dashboard to delegate create/delete to the control-plane. An integration test for end-to-end cleanup is added.
Sequence Diagrams

```mermaid
sequenceDiagram
    participant Web as Web Dashboard
    participant TRPC as TRPC Router
    participant Ctrl as Ctrl API<br/>(AppService/ProjectService)
    participant Restate as Restate Hydra<br/>(Workers)
    participant DB as Database
    rect rgba(100,150,200,0.5)
    Note over Web,DB: Project Creation Flow
    Web->>TRPC: createProject(name,slug)
    TRPC->>Ctrl: ProjectService.CreateProject(workspaceId,name,slug)
    Ctrl->>DB: check slug / insert project
    Ctrl->>Ctrl: call AppService.CreateApp(projectId,...)
    Ctrl->>DB: insert app + environments + settings (transaction)
    Ctrl-->>TRPC: CreateProjectResponse(projectId)
    TRPC-->>Web: Project created
    end
```
```mermaid
sequenceDiagram
    participant Web as Web Dashboard
    participant TRPC as TRPC Router
    participant Ctrl as Ctrl API<br/>(ProjectService)
    participant Restate as Restate Hydra<br/>(Workers)
    participant DB as Database
    rect rgba(200,100,100,0.5)
    Note over Web,DB: Project Deletion Flow (Durable)
    Web->>TRPC: deleteProject(projectId)
    TRPC->>Ctrl: ProjectService.DeleteProject(projectId)
    Ctrl->>Restate: Enqueue ProjectService.Delete(projectId)
    Ctrl-->>TRPC: DeleteProjectResponse (accepted)
    TRPC-->>Web: Deletion started
    par Async Durable Workflow
        Restate->>DB: ListAppIdsByProject(projectId)
        DB-->>Restate: [appId...]
        loop For each app
            Restate->>Restate: AppService.Delete(appId)
            Restate->>Restate: EnvironmentService.Delete(envId)
            Restate->>DB: Delete*ByEnvironmentId(...)
            DB-->>Restate: OK
        end
        Restate->>DB: DeleteProjectById(projectId)
        DB-->>Restate: Project deleted
    end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks | ✅ 1 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
⚠️ Warning: There were issues while running some tools. Please review the errors and either fix the tool's configuration or disable the tool if it is a critical failure.

🔧 golangci-lint (2.11.3): Command failed
Actionable comments posted: 11
🧹 Nitpick comments (5)
svc/ctrl/services/app/service.go (1)
32-39: Explicit zero-value initialization is redundant.
`ctrlv1connect.UnimplementedAppServiceHandler{}` is already the zero value and will be auto-initialized. Minor nit.

Optional simplification:

```diff
 func New(cfg Config) *Service {
 	return &Service{
-		UnimplementedAppServiceHandler: ctrlv1connect.UnimplementedAppServiceHandler{},
 		db:      cfg.Database,
 		restate: cfg.Restate,
 		bearer:  cfg.Bearer,
 	}
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@svc/ctrl/services/app/service.go` around lines 32-39: The struct literal in New(cfg Config) explicitly sets UnimplementedAppServiceHandler: ctrlv1connect.UnimplementedAppServiceHandler{}, which is redundant because that field will be the zero value automatically; remove the explicit initialization from the Service composite literal (leave the field out) so the UnimplementedAppServiceHandler field is implicitly zero-initialized while keeping db, restate, and bearer assignments intact.

svc/ctrl/worker/environment/delete_handler.go (1)
23-93: Consider extracting deletion steps to reduce repetition.

The 12 sequential `restate.RunVoid` blocks follow identical structure. A table-driven approach could reduce boilerplate.

Optional: table-driven deletion

```go
type deleteStep struct {
	name string
	fn   func(ctx context.Context, db db.DBTX, id string) error
}

steps := []deleteStep{
	{"delete network policies", db.Query.DeleteCiliumNetworkPoliciesByEnvironmentId},
	{"delete sentinels", db.Query.DeleteSentinelsByEnvironmentId},
	// ... remaining steps
}

for _, step := range steps {
	if err := restate.RunVoid(ctx, func(runCtx restate.RunContext) error {
		return step.fn(runCtx, s.db.RW(), envID)
	}, restate.WithName(step.name)); err != nil {
		return nil, fmt.Errorf("%s: %w", step.name, err)
	}
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@svc/ctrl/worker/environment/delete_handler.go` around lines 23-93, Refactor the repeated restate.RunVoid blocks into a table-driven loop: create a slice of steps (e.g., with fields name string and fn func(restate.RunContext, db.DBTX, string) error) containing the functions shown (db.Query.DeleteCiliumNetworkPoliciesByEnvironmentId, DeleteSentinelsByEnvironmentId, DeleteCustomDomainsByEnvironmentId, DeleteFrontlineRoutesByEnvironmentId, DeleteAppEnvVarsByEnvironmentId, DeleteAppRegionalSettingsByEnvironmentId, DeleteAppBuildSettingsByEnvironmentId, DeleteAppRuntimeSettingsByEnvironmentId, DeleteDeploymentStepsByEnvironmentId, DeleteDeploymentTopologiesByEnvironmentId, DeleteDeploymentsByEnvironmentId, DeleteEnvironmentById) and then loop over them calling restate.RunVoid(ctx, func(runCtx restate.RunContext) error { return step.fn(runCtx, s.db.RW(), envID) }, restate.WithName(step.name)) and return fmt.Errorf("%s: %w", step.name, err) on error; this preserves restate.RunVoid and restate.WithName usage while removing boilerplate.

pkg/db/queries/app_list_ids_by_project.sql (1)
2-2: Add a dedicated index on `apps.project_id` for optimal query performance.

The query currently relies on the composite unique constraint `apps_project_slug_idx (project_id, slug)`, which covers the lookup but is not optimized for single-column access. Other tables in the schema (e.g., `deployments`, `environments`, `custom_domains`) have dedicated indexes for `project_id` lookups. Add a standalone index to match the pattern and ensure efficient query execution:

```sql
CREATE INDEX `apps_project_idx` ON `apps` (`project_id`);
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@pkg/db/queries/app_list_ids_by_project.sql` at line 2, Add a standalone index for apps.project_id to optimize the app_list_ids_by_project query: create a migration that adds an index named apps_project_idx on the apps table for the project_id column, ensure the migration is applied/checked in the DB migration tool and include a matching rollback/drop index statement; reference the query file app_list_ids_by_project.sql to verify the new index covers that lookup and ensure no duplicate index already exists before creating it.

svc/ctrl/worker/project/service.go (1)
17-20: Optional: rename `Config.DB` → `Config.Database` for consistency.

Matches sibling service config naming; reduces wiring friction.

Minimal diff:

```diff
 type Config struct {
-	DB db.Database
+	Database db.Database
 }
@@
 	return &Service{
 		UnimplementedProjectServiceServer: hydrav1.UnimplementedProjectServiceServer{},
-		db: cfg.DB,
+		db: cfg.Database,
 	}
 }
```

Also applies to: 26-26
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@svc/ctrl/worker/project/service.go` around lines 17-20, Rename the Config struct field DB to Database for consistency with sibling service configs: update the field name in the Config type declaration (Config.DB → Config.Database) and then update all references/usages (constructor parameters, struct literals, assignments, and any code accessing Config.DB) to use Config.Database and the corresponding identifier. Ensure the service constructor or NewService function, any dependency injection wiring, and tests are updated to the new name to avoid compile errors.

svc/ctrl/worker/run.go (1)
203-211: Consider retry/ingress options for new deletion services.

Other services in this file configure retry policies (deploy, customdomain) or private ingress (deployment, routing). The new Project/App/Environment services have no options. This may be intentional, but for deletion workflows you may want:

- `WithIngressPrivate(true)` if these should only be called internally
- Retry policies if external failures need handling
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@svc/ctrl/worker/run.go` around lines 203 - 211, The new Project/App/Environment service bindings (restateSrv.Bind(hydrav1.NewProjectServiceServer(workerproject.New(...))), hydrav1.NewAppServiceServer(workerapp.New(...)), hydrav1.NewEnvironmentServiceServer(workerenvironment.New(...))) lack ingress/retry options; update these Bind calls to mirror other services by passing the same transport options used elsewhere—e.g., enable private ingress via WithIngressPrivate(true) if these are internal-only and add a retry policy (the same WithRetry/RetryPolicy used by deploy/customdomain services) to handle transient external failures during deletion workflows; ensure you locate and apply the options where restateSrv.Bind is invoked for the three New*ServiceServer calls.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@svc/ctrl/internal/auth/auth.go`:
- Around line 26-38: Add a guard at the start of Authenticate to fail fast when
the configured token is empty: check the token parameter (passed into
Authenticate) for empty string and return a
connect.NewError(connect.CodeUnauthenticated, fmt.Errorf("empty configured
bearer token")) before performing header parsing or ConstantTimeCompare; update
Authenticate to reference the token variable so an empty configured token cannot
cause []byte("") == []byte("") to authenticate.
- Around line 17-21: The comment for the Request interface incorrectly claims
connect.StreamingHandlerConn satisfies it because StreamingHandlerConn exposes
RequestHeader(), not Header(); either update the comment to remove the streaming
reference (so Request only documents unary *connect.Request usage) or add a thin
adapter type (e.g., streamingRequestAdapter) that wraps
connect.StreamingHandlerConn and implements Header() by delegating to
RequestHeader(), then use that adapter in Authenticate or document its
existence; update the Request interface comment to accurately reflect which
types satisfy it and reference Request, Authenticate,
connect.StreamingHandlerConn, Header(), and RequestHeader() so reviewers can
find the change.
In `@svc/ctrl/proto/hydra/v1/app.proto`:
- Around line 16-21: Rename the RPC messages to conform to Buf naming for the
Delete RPC: change DeleteAppRequest -> DeleteRequest and DeleteAppResponse ->
DeleteResponse, update the rpc signature rpc Delete(DeleteRequest) returns
(DeleteResponse) {}, and update any references/usages of
DeleteAppRequest/DeleteAppResponse across the proto files and generated code
(e.g., client/server stubs) to the new names.
In `@svc/ctrl/proto/hydra/v1/environment.proto`:
- Around line 16-21: Rename the RPC request/response message types to match
Buf's RPC_*_STANDARD_NAME for the Delete RPC: change DeleteEnvironmentRequest to
DeleteRequest and DeleteEnvironmentResponse to DeleteResponse, and update the
rpc Delete(...) signature to use the new message names (rpc
Delete(DeleteRequest) returns (DeleteResponse) {}), ensuring any other
references in the proto or generated code are updated accordingly.
In `@svc/ctrl/proto/hydra/v1/project.proto`:
- Around line 16-21: The RPC method Delete currently uses message types
DeleteProjectRequest and DeleteProjectResponse which violate Buf naming rules;
rename those message types to follow the standard (e.g., DeleteRequest and
DeleteResponse) and update the RPC signature rpc Delete(DeleteRequest) returns
(DeleteResponse) {} as well as any references to
DeleteProjectRequest/DeleteProjectResponse (e.g., in imports or other RPCs) to
use the new names so the RPC follows MethodNameRequest/MethodNameResponse
convention.
In `@svc/ctrl/services/app/delete_app.go`:
- Around line 29-35: The pre-check uses the read-only replica via s.db.RO()
which can lag and cause false-404s; change the FindAppById call to use the
primary DB handle (replace s.db.RO() with the primary/primary-level handle your
DB wrapper exposes, e.g., s.db.RW() or s.db.Primary()) for the lookup of
req.Msg.GetAppId(), or alternatively remove this precheck and ensure the
DeleteApp/delete workflow is idempotent so deletes proceed safely; update the
call site referencing FindAppById, s.db.RO(), and req.Msg.GetAppId()
accordingly.
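A toy model of the replication-lag hazard this comment describes; all types here are hypothetical stand-ins (the real code reads through `s.db.RO()` and `s.db.RW()` handles), used only to show why the replica pre-check can produce a false 404.

```go
package main

// store is a toy stand-in for a database handle.
type store map[string]bool

func findAppByID(s store, id string) bool { return s[id] }

// preCheck shows the hazard: a row just written to the primary may not have
// replicated yet, so the replica lookup wrongly reports "not found".
func preCheck(primary, replica store, id string, useReplica bool) bool {
	if useReplica {
		return findAppByID(replica, id) // may lag behind the primary
	}
	return findAppByID(primary, id) // read-your-writes on the primary
}
```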
In `@svc/ctrl/services/project/create_project.go`:
- Around line 38-63: You persist the project via db.Query.InsertProject and then
call s.appService.CreateApp; if CreateApp fails you must compensate to avoid
leaving an orphaned project. Wrap the create+app flow in a DB transaction OR, on
CreateApp error, call your DB delete helper (e.g., db.Query.DeleteProject or
equivalent) with projectID and workspaceID to remove the inserted row, and
return the original CreateApp error (or return a combined error if the
compensating delete also fails); keep the Authorization header forwarding for
the internal CreateApp call unchanged.
In `@svc/ctrl/worker/project/delete_handler.go`:
- Around line 31-36: The loop calling hydrav1.NewAppServiceClient(ctx,
appID).Delete().Send(&hydrav1.DeleteAppRequest{}) ignores the returned error;
update the code in delete_handler.go to capture the result and check the error
from Send(), e.g. call resp, err := appClient.Delete().Send(...), and on err log
or handle it (using the existing logger and include projectID and appID)
consistent with other files like delete_project.go and customdomain/service.go
so transport/HTTP scheduling failures are surfaced.
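A sketch of the suggested fix with the generated client abstracted away: `send` stands in for the `AppServiceClient.Delete().Send(...)` call and `logf` for the existing logger, both hypothetical parameters introduced only so the error-capture pattern is runnable on its own.

```go
package main

// sendError is a tiny error type so this sketch needs no imports.
type sendError string

func (e sendError) Error() string { return string(e) }

// scheduleAppDeletes captures the error from each scheduling call instead of
// discarding it, logs it with projectID/appID context, and reports how many
// enqueues failed.
func scheduleAppDeletes(projectID string, appIDs []string, send func(appID string) error, logf func(format string, args ...any)) int {
	failed := 0
	for _, appID := range appIDs {
		if err := send(appID); err != nil {
			failed++
			logf("failed to schedule app delete projectID=%s appID=%s err=%v", projectID, appID, err)
		}
	}
	return failed
}
```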
In `@web/apps/dashboard/app/(app)/[workspaceSlug]/projects/_components/list/project-actions.tsx`:
- Around line 24-26: The code uses a forbidden non-null assertion on
workspace?.slug! when calling getProjectActionItems; instead explicitly handle
the missing workspace: obtain workspace via useWorkspace(), then either
early-return or set a safe fallback before calling getProjectActionItems (for
example const workspaceSlug = workspace?.slug ?? "" and pass workspaceSlug, or
if actions require a workspace, return an empty menu or loading state when
workspace is null). Update the call site where getProjectActionItems(projectId,
projectName, workspaceSlug, router) is invoked and ensure getProjectActionItems'
signature/logic accepts an empty string or undefined if needed so no `!`
operator is used.
In `@web/apps/dashboard/lib/trpc/routers/deploy/project/delete.ts`:
- Around line 52-56: The audit entry is premature: where insertAuditLogs is
called after queuing the control cascade it should not record a successful
deletion; change the event and description to indicate a request (e.g., use
event "project.deletion_requested" or description "Deletion requested for
${project.id}") instead of "project.delete"/"Deleted ${project.id}", or
alternatively remove this audit here and ensure the worker emits the final
"project.delete" audit when the deletion actually completes; update the call to
insertAuditLogs (used with ctx.workspace.id, ctx.user.id and project.id)
accordingly.
- Around line 37-49: The catch block after calling ctrl.project.deleteProject
currently only checks for TRPCError and turns all other errors into a generic
INTERNAL_SERVER_ERROR; instead detect ConnectError from `@connectrpc/connect`
(import ConnectError), map its .code values (e.g., Code.NotFound,
Code.FailedPrecondition, etc.) to the appropriate TRPC error codes, and rethrow
new TRPCError instances with the mapped code and original message;
alternatively, if your repo requires using the fault helper, route the
ConnectError through the fault conversion function (preserving details) before
throwing. Update the catch to: if (err instanceof TRPCError) throw err; else if
(err instanceof ConnectError) convert err.code to a matching TRPCError code (or
call fault.convert/connect-to-trpc helper) and throw that; otherwise log and
throw INTERNAL_SERVER_ERROR as before.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 5c884efa-95fb-4188-aff5-08fa8d084e3e
⛔ Files ignored due to path filters (19)
- `gen/proto/ctrl/v1/BUILD.bazel` is excluded by `!**/gen/**`
- `gen/proto/ctrl/v1/app.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `gen/proto/ctrl/v1/ctrlv1connect/BUILD.bazel` is excluded by `!**/gen/**`
- `gen/proto/ctrl/v1/ctrlv1connect/app.connect.go` is excluded by `!**/gen/**`
- `gen/proto/ctrl/v1/ctrlv1connect/project.connect.go` is excluded by `!**/gen/**`
- `gen/proto/ctrl/v1/project.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `gen/proto/hydra/v1/BUILD.bazel` is excluded by `!**/gen/**`
- `gen/proto/hydra/v1/app.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `gen/proto/hydra/v1/app_restate.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `gen/proto/hydra/v1/environment.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `gen/proto/hydra/v1/environment_restate.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `gen/proto/hydra/v1/project.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `gen/proto/hydra/v1/project_restate.pb.go` is excluded by `!**/*.pb.go`, `!**/gen/**`
- `gen/rpc/ctrl/BUILD.bazel` is excluded by `!**/gen/**`
- `gen/rpc/ctrl/app_generated.go` is excluded by `!**/gen/**`
- `gen/rpc/ctrl/project_generated.go` is excluded by `!**/gen/**`
- `go.sum` is excluded by `!**/*.sum`
- `web/apps/dashboard/gen/proto/ctrl/v1/app_pb.ts` is excluded by `!**/gen/**`
- `web/apps/dashboard/gen/proto/ctrl/v1/project_pb.ts` is excluded by `!**/gen/**`
📒 Files selected for processing (94)

- `go.mod`
- `pkg/db/BUILD.bazel`
- `pkg/db/app_build_settings_delete_by_environment.sql_generated.go`
- `pkg/db/app_delete_by_id.sql_generated.go`
- `pkg/db/app_env_var_delete_by_environment.sql_generated.go`
- `pkg/db/app_env_var_insert.sql_generated.go`
- `pkg/db/app_list_ids_by_project.sql_generated.go`
- `pkg/db/app_regional_settings_delete_by_environment.sql_generated.go`
- `pkg/db/app_runtime_settings_delete_by_environment.sql_generated.go`
- `pkg/db/bulk_app_env_var_insert.sql_generated.go`
- `pkg/db/cilium_network_policy_delete_by_environment.sql_generated.go`
- `pkg/db/custom_domain_delete_by_environment.sql_generated.go`
- `pkg/db/custom_domain_delete_by_project.sql_generated.go`
- `pkg/db/deployment_delete_by_environment.sql_generated.go`
- `pkg/db/deployment_step_delete_by_environment.sql_generated.go`
- `pkg/db/deployment_topology_delete_by_environment.sql_generated.go`
- `pkg/db/environment_delete_by_id.sql_generated.go`
- `pkg/db/environment_list_ids_by_app.sql_generated.go`
- `pkg/db/frontline_route_delete_by_environment.sql_generated.go`
- `pkg/db/frontline_route_delete_by_project.sql_generated.go`
- `pkg/db/github_repo_connection_delete_by_app.sql_generated.go`
- `pkg/db/github_repo_connection_delete_by_project.sql_generated.go`
- `pkg/db/instance_count_by_app.sql_generated.go`
- `pkg/db/project_delete_by_id.sql_generated.go`
- `pkg/db/querier_bulk_generated.go`
- `pkg/db/querier_generated.go`
- `pkg/db/queries/app_build_settings_delete_by_environment.sql`
- `pkg/db/queries/app_delete_by_id.sql`
- `pkg/db/queries/app_env_var_delete_by_environment.sql`
- `pkg/db/queries/app_env_var_insert.sql`
- `pkg/db/queries/app_list_ids_by_project.sql`
- `pkg/db/queries/app_regional_settings_delete_by_environment.sql`
- `pkg/db/queries/app_runtime_settings_delete_by_environment.sql`
- `pkg/db/queries/cilium_network_policy_delete_by_environment.sql`
- `pkg/db/queries/custom_domain_delete_by_environment.sql`
- `pkg/db/queries/custom_domain_delete_by_project.sql`
- `pkg/db/queries/deployment_delete_by_environment.sql`
- `pkg/db/queries/deployment_step_delete_by_environment.sql`
- `pkg/db/queries/deployment_topology_delete_by_environment.sql`
- `pkg/db/queries/environment_delete_by_id.sql`
- `pkg/db/queries/environment_list_ids_by_app.sql`
- `pkg/db/queries/frontline_route_delete_by_environment.sql`
- `pkg/db/queries/frontline_route_delete_by_project.sql`
- `pkg/db/queries/github_repo_connection_delete_by_app.sql`
- `pkg/db/queries/github_repo_connection_delete_by_project.sql`
- `pkg/db/queries/instance_count_by_app.sql`
- `pkg/db/queries/project_delete_by_id.sql`
- `pkg/db/queries/sentinel_count_by_app.sql`
- `pkg/db/queries/sentinel_count_by_project.sql`
- `pkg/db/queries/sentinel_delete_by_environment.sql`
- `pkg/db/queries/sentinel_delete_by_project.sql`
- `pkg/db/sentinel_count_by_app.sql_generated.go`
- `pkg/db/sentinel_count_by_project.sql_generated.go`
- `pkg/db/sentinel_delete_by_environment.sql_generated.go`
- `pkg/db/sentinel_delete_by_project.sql_generated.go`
- `svc/ctrl/api/BUILD.bazel`
- `svc/ctrl/api/run.go`
- `svc/ctrl/integration/cleanup_test.go`
- `svc/ctrl/internal/auth/BUILD.bazel`
- `svc/ctrl/internal/auth/auth.go`
- `svc/ctrl/proto/ctrl/v1/app.proto`
- `svc/ctrl/proto/ctrl/v1/project.proto`
- `svc/ctrl/proto/hydra/v1/app.proto`
- `svc/ctrl/proto/hydra/v1/environment.proto`
- `svc/ctrl/proto/hydra/v1/project.proto`
- `svc/ctrl/services/app/BUILD.bazel`
- `svc/ctrl/services/app/auth.go`
- `svc/ctrl/services/app/create_app.go`
- `svc/ctrl/services/app/delete_app.go`
- `svc/ctrl/services/app/service.go`
- `svc/ctrl/services/cluster/BUILD.bazel`
- `svc/ctrl/services/cluster/auth.go`
- `svc/ctrl/services/project/BUILD.bazel`
- `svc/ctrl/services/project/auth.go`
- `svc/ctrl/services/project/create_project.go`
- `svc/ctrl/services/project/delete_project.go`
- `svc/ctrl/services/project/service.go`
- `svc/ctrl/worker/BUILD.bazel`
- `svc/ctrl/worker/app/BUILD.bazel`
- `svc/ctrl/worker/app/delete_handler.go`
- `svc/ctrl/worker/app/service.go`
- `svc/ctrl/worker/environment/BUILD.bazel`
- `svc/ctrl/worker/environment/delete_handler.go`
- `svc/ctrl/worker/environment/service.go`
- `svc/ctrl/worker/project/BUILD.bazel`
- `svc/ctrl/worker/project/delete_handler.go`
- `svc/ctrl/worker/project/service.go`
- `svc/ctrl/worker/run.go`
- `web/apps/dashboard/app/(app)/[workspaceSlug]/projects/_components/list/index.tsx`
- `web/apps/dashboard/app/(app)/[workspaceSlug]/projects/_components/list/project-actions.tsx`
- `web/apps/dashboard/lib/collections/deploy/environment-settings.ts`
- `web/apps/dashboard/lib/trpc/routers/ctrl.ts`
- `web/apps/dashboard/lib/trpc/routers/deploy/project/create.ts`
- `web/apps/dashboard/lib/trpc/routers/deploy/project/delete.ts`
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@svc/ctrl/worker/run.go`:
- Around line 204-213: The three mutation service bindings for ProjectService,
AppService, and EnvironmentService omit the private ingress flag; update the
restateSrv.Bind calls that wrap
hydrav1.NewProjectServiceServer(workerproject.New(...)),
hydrav1.NewAppServiceServer(workerapp.New(...)), and
hydrav1.NewEnvironmentServiceServer(workerenvironment.New(...)) to include
restate.WithIngressPrivate(true) so their Delete mutation handlers are exposed
privately consistent with other control-plane mutation services.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository UI
Review profile: CHILL
Plan: Pro
Run ID: 22e62d2c-d901-4520-8338-b825445ef62e
📒 Files selected for processing (3)
- `pkg/db/BUILD.bazel`
- `pkg/db/querier_generated.go`
- `svc/ctrl/worker/run.go`
The pre-check used the read-only replica (s.db.RO()) which can lag behind the primary and cause false 404s on recently created apps. Switch to s.db.RW() to read from the primary.
This is larger than it would seem, but there's a ton of generated code.
We could have just used a single transaction to delete everything, but that would be shortsighted.
This way gives us composability and reusability.
If a user wants to just delete an env? Call the env VO
If a user wants to delete an app? Call the app VO (which then delegates to env VO)
etc..
I also want to change this later to do soft-deletes. Instead of deleting stuff right away, we schedule the deletion to Xh in the future and a user can cancel it within that timeframe.
This also moves the creation of projects/apps/envs to the control plane, because we don't want to duplicate queries in TS if possible.