feat(host-service): canonical workspaces.create + host_agent_configs (PR1) #3893
saddlepaddle merged 19 commits into main from
Conversation
Note: Reviews paused. This branch is under active development; to avoid overwhelming the author with review comments from an influx of new commits, CodeRabbit automatically paused this review. This behavior is configurable in the CodeRabbit settings.
📝 Walkthrough
Three planning documents specify a canonical host-driven workspace create flow and related changes:
- Canonical Workspace Create Flow
- Host Agent Configs (V2)
- Pane Store Registry (PR 3)
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Renderer
    participant HostService
    participant AttachStore
    participant PaneRegistry
    Renderer->>HostService: attachments.upload(file) → attachmentId
    HostService->>AttachStore: persist file under stable attachmentId
    Renderer->>HostService: workspace.create({mode, launches, attachmentIds, agentIds})
    HostService->>AttachStore: resolve attachmentIds → host-local paths
    HostService->>HostService: persist workspace row & runtime setup
    HostService->>HostService: finalize prompts (append attachment blocks)
    HostService-->>Renderer: workspace.create response (workspaceId, launchIds)
    Renderer->>PaneRegistry: addLaunchPanes(workspaceId, launchIds)
    Renderer->>Renderer: navigate to workspace route (render pane state)
    HostService->>HostService: start sessions/runtime for launched panes
```
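The diagram above implies a request/response contract for the canonical `workspace.create` call. The TypeScript sketch below is an illustrative reconstruction: the field names `mode`, `launches`, `attachmentIds`, `agentIds`, `workspaceId`, and `launchIds` come from the diagram, while everything else (the `LaunchSpec` union shape, the placeholder values) is an assumption, not the project's actual types.

```typescript
// Hypothetical sketch of the canonical workspace.create contract, inferred
// from the sequence diagram. Concrete types in the codebase may differ.
type LaunchSpec =
  | { kind: "agent"; agentId: string; prompt?: string }
  | { kind: "terminal" };

type WorkspaceCreateRequest = {
  mode: string; // concrete mode values are not specified in this excerpt
  launches: LaunchSpec[];
  attachmentIds: string[]; // stable ids returned earlier by attachments.upload
  agentIds: string[];
};

type WorkspaceCreateResponse = {
  workspaceId: string;
  launchIds: string[]; // used by the renderer to register panes
};

// Example payload following the diagram's flow (values are placeholders):
const request: WorkspaceCreateRequest = {
  mode: "default",
  launches: [{ kind: "agent", agentId: "claude", prompt: "Fix the failing test" }],
  attachmentIds: ["att_123"],
  agentIds: ["claude"],
};
```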
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (3 passed)
Greptile Summary
This PR introduces the

Confidence Score: 4/5
Safe to merge after wrapping the reorder loop and resetToDefaults delete+insert in transactions. Two P1 data-integrity issues in the backend router (non-atomic reorder and non-atomic reset) should be fixed before merging. All other findings are P2 or lower and do not block merge. packages/host-service/src/trpc/router/settings/agent-configs.ts — the reorder loop and resetToDefaults mutation both need db.transaction() wrappers.
| Filename | Overview |
|---|---|
| packages/host-service/src/trpc/router/settings/agent-configs.ts | Core CRUD router for host agent configs; two multi-step mutations (reorder, resetToDefaults) are not wrapped in transactions, risking partial writes on interruption. |
| apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/components/V2AgentsSettings/V2AgentsSettings.tsx | New V2 settings UI component with full CRUD and reorder; minor UX issue where card header shows saved label instead of draft label while editing. |
| packages/host-service/src/trpc/router/settings/agent-presets.ts | Hardcoded preset definitions with helpers for default seed filtering and lookup; straightforward and correct. |
| packages/host-service/src/db/schema.ts | Adds host_agent_configs table with appropriate columns, display_order index, and timestamp defaults; matches the migration SQL. |
| packages/host-service/drizzle/0004_add_host_agent_configs.sql | Clean migration adding the host_agent_configs table and display_order index; consistent with schema definition. |
| packages/host-service/src/trpc/router/settings/agent-configs.test.ts | 14 tests covering seeding, add/update/remove/reorder/reset with in-memory SQLite; good coverage of validation and rejection cases. |
| apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/AgentsSettings.tsx | Clean V1/V2 flag-based split; existing V1AgentsSettings logic unchanged. |
| packages/host-service/src/trpc/router/settings/index.ts | Thin settings router composing agentConfigsRouter; re-exports HostAgentConfigDto and preset types for external consumers. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant UI as V2AgentsSettings (Renderer)
    participant Host as host-service tRPC
    participant DB as host.db (SQLite)
    UI->>Host: settings.agentConfigs.list()
    Host->>DB: SELECT * FROM host_agent_configs ORDER BY display_order
    alt table empty (first call)
        Host->>DB: INSERT seeds (claude, amp, codex, gemini, copilot)
        DB-->>Host: 5 rows
    end
    DB-->>Host: rows
    Host-->>UI: HostAgentConfigDto[]
    UI->>Host: settings.agentConfigs.add({ presetId })
    Host->>DB: INSERT new row with next display_order
    DB-->>Host: inserted row
    Host-->>UI: HostAgentConfigDto
    UI->>Host: settings.agentConfigs.update({ id, patch })
    Host->>DB: SELECT row by id
    Host->>DB: UPDATE label/launchCommand/promptInput/updatedAt
    Host->>DB: SELECT row by id (read-back)
    Host-->>UI: HostAgentConfigDto
    UI->>Host: settings.agentConfigs.reorder({ ids })
    Host->>DB: SELECT all (validate ids)
    loop N individual UPDATEs (no transaction)
        Host->>DB: UPDATE display_order WHERE id = ?
    end
    Host-->>UI: HostAgentConfigDto[]
    UI->>Host: settings.agentConfigs.resetToDefaults()
    Host->>DB: DELETE all existing rows
    Host->>DB: INSERT seed rows
    Host-->>UI: HostAgentConfigDto[]
```
Prompt To Fix All With AI
Fix the following 4 code review issues. Work through them one at a time, proposing concise fixes.
---
### Issue 1 of 4
packages/host-service/src/trpc/router/settings/agent-configs.ts:230-237
**Non-atomic reorder — partial update on interruption**
The `reorder` loop fires N separate `UPDATE` statements with no enclosing transaction. If the process crashes or the mutation throws between iterations, the `display_order` column is left half-updated — items before the interruption point get their new indices while items after it keep stale values, silently corrupting the persisted order. Wrap the loop in `ctx.db.transaction(tx => { … })` so the entire reorder is applied atomically.
### Issue 2 of 4
packages/host-service/src/trpc/router/settings/agent-configs.ts:242-261
**Non-atomic reset — table transiently empty between delete and insert**
`resetToDefaults` deletes all existing rows and then inserts seeds in two separate statements. If an error (or process crash) occurs after the delete completes but before the insert runs, the `host_agent_configs` table is left empty. While the next `list()` call would re-seed via `seedDefaultsIfEmpty`, the user would silently lose all customisations with no error surfaced at reset time. Both operations should run inside a `ctx.db.transaction()` block so the reset is all-or-nothing.
### Issue 3 of 4
packages/host-service/src/trpc/router/settings/agent-configs.ts:195-203
**`remove()` silently succeeds for unknown IDs**
The delete runs unconditionally and always returns `{ success: true }`, even when no row matched. Callers (and the UI's `onSuccess` toast) can't distinguish a real deletion from a no-op caused by a stale or wrong ID. Consider reading the row first and throwing `NOT_FOUND` when it doesn't exist, consistent with how `update` handles the same scenario.
### Issue 4 of 4
apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/components/V2AgentsSettings/V2AgentsSettings.tsx:311-313
**Card title shows stale saved label while editing**
`CardHeader` renders `{config.label}` (the server-persisted prop) rather than `{label}` (the local draft state), so typing in the Label input doesn't update the card title until after Save. This means the card header and the input field show different values simultaneously, which may confuse users trying to preview their edit. Consider using `{label}` in the header, perhaps styled differently to signal an unsaved state.
Reviews (1): Last reviewed commit: "feat(host-service): host agent configs (..."
```ts
const now = Date.now();
input.ids.forEach((id, index) => {
  ctx.db
    .update(hostAgentConfigs)
    .set({ displayOrder: index, updatedAt: now })
    .where(eq(hostAgentConfigs.id, id))
    .run();
});
```
Non-atomic reorder — partial update on interruption
The reorder loop fires N separate UPDATE statements with no enclosing transaction. If the process crashes or the mutation throws between iterations, the display_order column is left half-updated — items before the interruption point get their new indices while items after it keep stale values, silently corrupting the persisted order. Wrap the loop in ctx.db.transaction(tx => { … }) so the entire reorder is applied atomically.
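The all-or-nothing behavior the review asks for can be demonstrated with a small self-contained sketch. The `MiniDb` below is purely illustrative (an in-memory stand-in, not Drizzle's API); in the real router the same pattern would be expressed by wrapping the loop in `ctx.db.transaction()`, as the comment suggests.

```typescript
// Illustrative only: an in-memory store showing why reorder must be atomic.
class MiniDb {
  rows = new Map<string, number>(); // id -> displayOrder

  // Minimal transaction: snapshot state, restore it if the body throws.
  transaction(fn: () => void): void {
    const snapshot = new Map(this.rows);
    try {
      fn();
    } catch (err) {
      this.rows = snapshot; // roll back all partial updates
      throw err;
    }
  }
}

function reorder(db: MiniDb, ids: string[]): void {
  db.transaction(() => {
    ids.forEach((id, index) => {
      if (!db.rows.has(id)) throw new Error(`unknown id: ${id}`);
      db.rows.set(id, index); // either all rows get new indices, or none do
    });
  });
}
```

With the transaction, an error on any id leaves every `displayOrder` at its pre-call value instead of half-updated.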
```ts
resetToDefaults: protectedProcedure.mutation(({ ctx }) => {
  const existing = listOrdered(ctx.db);
  if (existing.length > 0) {
    ctx.db
      .delete(hostAgentConfigs)
      .where(
        inArray(
          hostAgentConfigs.id,
          existing.map((row) => row.id),
        ),
      )
      .run();
  }
  const seeds = getDefaultSeedPresets().map((preset, index) =>
    rowFromPreset(preset, index),
  );
  if (seeds.length > 0) {
    ctx.db.insert(hostAgentConfigs).values(seeds).run();
  }
  return listOrdered(ctx.db).map(toOutput);
```
There was a problem hiding this comment.
Non-atomic reset — table transiently empty between delete and insert
resetToDefaults deletes all existing rows and then inserts seeds in two separate statements. If an error (or process crash) occurs after the delete completes but before the insert runs, the host_agent_configs table is left empty. While the next list() call would re-seed via seedDefaultsIfEmpty, the user would silently lose all customisations with no error surfaced at reset time. Both operations should run inside a ctx.db.transaction() block so the reset is all-or-nothing.
```ts
remove: protectedProcedure
  .input(z.object({ id: z.string().min(1) }))
  .mutation(({ ctx, input }) => {
    ctx.db
      .delete(hostAgentConfigs)
      .where(eq(hostAgentConfigs.id, input.id))
      .run();
    return { success: true as const };
  }),
```
remove() silently succeeds for unknown IDs
The delete runs unconditionally and always returns { success: true }, even when no row matched. Callers (and the UI's onSuccess toast) can't distinguish a real deletion from a no-op caused by a stale or wrong ID. Consider reading the row first and throwing NOT_FOUND when it doesn't exist, consistent with how update handles the same scenario.
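One lightweight way to surface the no-op, besides the read-then-throw approach the comment suggests, is to check the affected-row count (with better-sqlite3-backed Drizzle, the object returned by `.run()` typically exposes a `changes` count). The sketch below is illustrative: it uses a plain `Map` as a stand-in store, and `NotFoundError` is a hypothetical error type, not the router's actual `TRPCError`.

```typescript
// Illustrative only: distinguish a real deletion from a no-op on a stale id.
class NotFoundError extends Error {}

function removeById(store: Map<string, unknown>, id: string): { success: true } {
  // Map.delete returns whether a row existed; stand-in for result.changes.
  const changes = store.delete(id) ? 1 : 0;
  if (changes === 0) {
    throw new NotFoundError(`agent config ${id} not found`);
  }
  return { success: true };
}
```

In the tRPC router the throw would become `new TRPCError({ code: "NOT_FOUND", ... })`, matching how `update` reports a missing row.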
```tsx
value={label}
onChange={(e) => setLabel(e.target.value)}
/>
```
Card title shows stale saved label while editing
CardHeader renders {config.label} (the server-persisted prop) rather than {label} (the local draft state), so typing in the Label input doesn't update the card title until after Save. This means the card header and the input field show different values simultaneously, which may confuse users trying to preview their edit. Consider using {label} in the header, perhaps styled differently to signal an unsaved state.
Actionable comments posted: 7
🧹 Nitpick comments (1)
apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/AgentsSettings.tsx (1)
23-69: ⚡ Quick win: Move V1AgentsSettings into its own file.
This file now contains multiple components; extract V1AgentsSettings (and optionally its local types) into a separate component file to match repo structure rules. As per coding guidelines for **/*.{tsx,ts}: do not create multi-component files; use one component per file.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/AgentsSettings.tsx` around lines 23 - 69, Extract the V1AgentsSettings component (and any local types like AgentsSettingsProps) into a new file named appropriately (e.g., V1AgentsSettings.tsx), export it as the default export, and ensure it imports any dependencies used inside (electronTrpc, SETTING_ITEM_ID, isItemVisible, AgentCard, etc.). In the original file, remove the V1AgentsSettings implementation and replace it with a single import of the new component, and update any relative import paths where the component or its props are consumed. Keep the component behavior identical (props, hooks, and JSX) and ensure TypeScript types are moved or exported so callers still compile.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/components/V2AgentsSettings/V2AgentsSettings.tsx`:
- Around line 69-113: Capture the host URL at mutation time and use it when
invalidating so the mutated host's cache is always refreshed: for each mutation
(addMutation, removeMutation, reorderMutation, resetMutation) add an onMutate
that reads and returns the current activeHostUrl (e.g. return { hostUrl:
activeHostUrl }) and change onSuccess to invalidate using the context host (e.g.
queryClient.invalidateQueries({ queryKey: agentConfigsKey(context?.hostUrl ??
activeHostUrl) })); keep the existing host-not-available guard in mutationFn and
keep error/success toasts as-is.
- Around line 49-67: The current useQuery handlers for agentConfigsKey and
presetsKey are swallowing failures by returning [] (in the queryFn) which turns
host/service errors into empty-state UI; update both queries (the ones keyed by
agentConfigsKey and presetsKey that call getHostServiceClientByUrl(...) and then
client.settings.agentConfigs.list.query() / listPresets.query()) to let errors
propagate (remove the fallback "return []" for failure cases), add/consume
isError and error (and refetch) from useQuery, and render an explicit error +
retry UI when isError is true; also add optional onError callbacks to the
queries to log/report the error. Apply the same change pattern to the other
occurrences noted (the blocks at the mentioned locations).
In `@packages/host-service/src/trpc/router/settings/agent-configs.ts`:
- Around line 75-83: The seedDefaultsIfEmpty helper currently does a non-atomic
read-then-insert (calls listOrdered then db.insert(...).run()) which can
double-insert under concurrency; change it to perform the empty-check and insert
inside a single database transaction: begin a transaction on the HostDb, re-run
listOrdered(db) (or equivalent SELECT) inside the transaction to confirm still
empty, and only then call db.insert(hostAgentConfigs).values(seeds).run() within
that same transaction (rollback / do nothing if re-check shows rows). Keep the
function name seedDefaultsIfEmpty and the seed construction via
getDefaultSeedPresets().map(rowFromPreset, index) but ensure the insert is
guarded by the transactional re-check.
- Around line 118-124: The add() flow computes nextOrder from listOrdered(...)
then inserts via ctx.db.insert(hostAgentConfigs).values(insert).run(), which
allows race conditions; wrap the compute-and-insert in a single transaction or
use a DB-level uniqueness constraint on displayOrder with a retry-on-conflict.
Specifically, modify add() to begin a transaction, select/lock the current rows
(e.g., FOR UPDATE or equivalent), compute nextOrder, create the insert via
rowFromPreset(preset, nextOrder), then perform ctx.db.insert(...).run() and
commit; alternatively add a unique index on hostAgentConfigs.displayOrder and
implement a retry loop that recomputes nextOrder and re-inserts on conflict.
- Around line 149-159: The schema for the agent patch allows whitespace-only
strings because z.string().min(1) doesn't trim; update the object schema for
label and launchCommand to trim whitespace before validating (e.g., use
z.string().transform(s => s.trim()).min(1).optional() or z.string().refine(s =>
s.trim().length > 0, ...) wrapped as optional) so pure-space values are
rejected, keep promptInputSchema.optional() as-is, and preserve the existing
refine that requires at least one field to be present.
In `@plans/20260425-canonical-workspace-create-flow.md`:
- Around line 68-76: The schema for launches is inconsistent: the agent variant
(kind: "agent", agentId) currently marks prompt as optional while the design
text requires agent launches to include a prompt and says promptless launches
must use the terminal path; pick one contract and make both places match. Update
the launches type and the design text so that either (A) agent launches require
prompt (make prompt non-optional on the agent object and remove any language
allowing promptless agent launches) or (B) allow prompt to be optional
consistently (adjust design text to permit agent launches without prompt and
clarify terminal-only path usage), and ensure all mentions (the launches union,
the { kind: "agent"; agentId; prompt } shape, and the explanatory paragraphs)
reference the same rule.
In `@plans/20260425-host-agent-configs-pr1.md`:
- Around line 5-9: The plan language incorrectly implies PR1 has already moved
the V2 modal and launch flow to host-owned configs; update the prose so it
clearly marks modal/launch integration as follow-up work outside PR1 scope by
changing references to the "V2 new workspace modal", "modal/launch flow", and
any statements asserting the host resolves agentIds for launches to indicate
those integrations remain deferred in the staged rollout and still read legacy
desktop presets; apply the same wording tightening to the later section that
mirrors this claim (the paragraph(s) that discuss the host-owned configs, V2
Agents settings UI, and migration of legacy desktop presets) so PR1 scope and
follow-up work are unambiguous.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f2de2915-e6b1-4e76-8dca-72fbb6d19476
📒 Files selected for processing (15)
- apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/AgentsSettings.tsx
- apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/components/V2AgentsSettings/V2AgentsSettings.tsx
- apps/desktop/src/renderer/routes/_authenticated/settings/agents/components/AgentsSettings/components/V2AgentsSettings/index.ts
- packages/host-service/drizzle/0004_add_host_agent_configs.sql
- packages/host-service/drizzle/meta/0004_snapshot.json
- packages/host-service/drizzle/meta/_journal.json
- packages/host-service/package.json
- packages/host-service/src/db/schema.ts
- packages/host-service/src/trpc/router/router.ts
- packages/host-service/src/trpc/router/settings/agent-configs.test.ts
- packages/host-service/src/trpc/router/settings/agent-configs.ts
- packages/host-service/src/trpc/router/settings/agent-presets.ts
- packages/host-service/src/trpc/router/settings/index.ts
- plans/20260425-canonical-workspace-create-flow.md
- plans/20260425-host-agent-configs-pr1.md
```tsx
const { data: configs, isLoading } = useQuery({
  queryKey: agentConfigsKey(activeHostUrl),
  enabled: !!activeHostUrl,
  queryFn: async () => {
    if (!activeHostUrl) return [];
    const client = getHostServiceClientByUrl(activeHostUrl);
    return client.settings.agentConfigs.list.query();
  },
});

const { data: presets } = useQuery({
  queryKey: presetsKey(activeHostUrl),
  enabled: !!activeHostUrl,
  queryFn: async () => {
    if (!activeHostUrl) return [];
    const client = getHostServiceClientByUrl(activeHostUrl);
    return client.settings.agentConfigs.listPresets.query();
  },
});
```
Handle fetch failures separately from the empty state.
If list() fails, isLoading flips to false and the component renders “No agents configured,” which masks a host-service error as missing data. listPresets() failure also silently turns the add menu into an empty list. Please render an explicit error/retry state for query failures instead of falling through to the empty state.
Also applies to: 147-163, 193-200
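One way to keep error and empty states distinct is to derive an explicit view state from the query flags before rendering. The pure function below is an illustrative sketch (not the component's actual code); the flag names mirror TanStack Query's `isLoading`/`isError`/`error`/`data` results.

```typescript
// Illustrative helper: map query flags to exactly one UI state, so a fetch
// failure can never be rendered as "No agents configured".
type QueryView<T> =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "empty" }
  | { kind: "data"; data: T[] };

function viewFromQuery<T>(q: {
  isLoading: boolean;
  isError: boolean;
  error?: Error | null;
  data?: T[];
}): QueryView<T> {
  if (q.isLoading) return { kind: "loading" };
  // Errors are checked before the data/empty branch, so they always win.
  if (q.isError) return { kind: "error", message: q.error?.message ?? "Unknown error" };
  if (!q.data || q.data.length === 0) return { kind: "empty" };
  return { kind: "data", data: q.data };
}
```

The component can then switch on `kind` and render an explicit error + retry UI for the `"error"` case.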
```tsx
const invalidate = () =>
  queryClient.invalidateQueries({ queryKey: agentConfigsKey(activeHostUrl) });

const addMutation = useMutation({
  mutationFn: async (presetId: string) => {
    if (!activeHostUrl) throw new Error("Host service not available");
    const client = getHostServiceClientByUrl(activeHostUrl);
    return client.settings.agentConfigs.add.mutate({ presetId });
  },
  onSuccess: () => invalidate(),
  onError: (err) => toast.error(err.message),
});

const removeMutation = useMutation({
  mutationFn: async (id: string) => {
    if (!activeHostUrl) throw new Error("Host service not available");
    const client = getHostServiceClientByUrl(activeHostUrl);
    return client.settings.agentConfigs.remove.mutate({ id });
  },
  onSuccess: () => invalidate(),
  onError: (err) => toast.error(err.message),
});

const reorderMutation = useMutation({
  mutationFn: async (ids: string[]) => {
    if (!activeHostUrl) throw new Error("Host service not available");
    const client = getHostServiceClientByUrl(activeHostUrl);
    return client.settings.agentConfigs.reorder.mutate({ ids });
  },
  onSuccess: () => invalidate(),
  onError: (err) => toast.error(err.message),
});

const resetMutation = useMutation({
  mutationFn: async () => {
    if (!activeHostUrl) throw new Error("Host service not available");
    const client = getHostServiceClientByUrl(activeHostUrl);
    return client.settings.agentConfigs.resetToDefaults.mutate();
  },
  onSuccess: () => {
    invalidate();
    toast.success("Reset to default agents");
  },
  onError: (err) => toast.error(err.message),
});
```
Invalidate the cache for the host that actually mutated.
These success handlers derive the invalidation key from the current activeHostUrl. If the user switches hosts while a mutation is in flight, the completion can invalidate the new host and leave the mutated host stale until a later manual refresh. Capture the host URL at mutate time and invalidate that specific key in onSuccess.
Also applies to: 255-268
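The capture-at-mutate-time idea can be sketched without the framework. The helper below is not TanStack Query itself; it is an illustrative stand-in showing the semantics of returning the host URL from `onMutate` and consuming it in `onSuccess`. All names here are assumptions for the example.

```typescript
// Illustrative only: snapshot the active host URL when the mutation starts,
// so the completion handler invalidates the host that actually mutated,
// even if the user switches hosts while the request is in flight.
function makeMutation(
  getActiveHostUrl: () => string,
  invalidate: (hostUrl: string) => void,
) {
  return async (run: (hostUrl: string) => Promise<void>): Promise<void> => {
    // Equivalent to onMutate returning { hostUrl } as mutation context.
    const hostUrl = getActiveHostUrl();
    await run(hostUrl);
    // Equivalent to onSuccess(data, vars, context): use the captured host,
    // not whatever activeHostUrl happens to be now.
    invalidate(hostUrl);
  };
}
```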
```ts
function seedDefaultsIfEmpty(db: HostDb): HostAgentConfigRow[] {
  const existing = listOrdered(db);
  if (existing.length > 0) return existing;
  const seeds = getDefaultSeedPresets().map((preset, index) =>
    rowFromPreset(preset, index),
  );
  if (seeds.length === 0) return existing;
  db.insert(hostAgentConfigs).values(seeds).run();
  return listOrdered(db);
```
Make default seeding atomic.
This helper does a read-then-insert with no transaction. Two concurrent first list() calls can both observe an empty table and insert the bundled defaults twice, which is especially easy here because duplicate presetId rows are otherwise valid. Please wrap the empty-check and insert in a single transaction (and re-check inside it).
```ts
const existing = listOrdered(ctx.db);
const nextOrder =
  existing.length === 0
    ? 0
    : Math.max(...existing.map((row) => row.displayOrder)) + 1;
const insert = rowFromPreset(preset, nextOrder);
ctx.db.insert(hostAgentConfigs).values(insert).run();
```
Serialize displayOrder assignment in add().
nextOrder is derived from a snapshot of the current rows, so concurrent add() calls can pick the same value and persist duplicate displayOrders. That makes persisted ordering ambiguous. Compute the next order and insert under one transaction, or enforce uniqueness and retry on conflict.
```ts
.object({
  label: z.string().min(1).optional(),
  launchCommand: z.string().min(1).optional(),
  promptInput: promptInputSchema.optional(),
})
.refine(
  (patch) =>
    patch.label !== undefined ||
    patch.launchCommand !== undefined ||
    patch.promptInput !== undefined,
  { message: "Patch must update at least one field" },
```
Reject whitespace-only edits.
z.string().min(1) still accepts values like " ". That lets a blank launchCommand be saved and only fail much later when the agent is launched. label has the same problem. Trim before enforcing non-empty.
Suggested validation change
patch: z
  .object({
-   label: z.string().min(1).optional(),
-   launchCommand: z.string().min(1).optional(),
+   label: z.string().trim().min(1).optional(),
+   launchCommand: z.string().trim().min(1).optional(),
    promptInput: promptInputSchema.optional(),
  })

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Original:

  .object({
    label: z.string().min(1).optional(),
    launchCommand: z.string().min(1).optional(),
    promptInput: promptInputSchema.optional(),
  })
  .refine(
    (patch) =>
      patch.label !== undefined ||
      patch.launchCommand !== undefined ||
      patch.promptInput !== undefined,
    { message: "Patch must update at least one field" },

Suggested:

  .object({
    label: z.string().trim().min(1).optional(),
    launchCommand: z.string().trim().min(1).optional(),
    promptInput: promptInputSchema.optional(),
  })
  .refine(
    (patch) =>
      patch.label !== undefined ||
      patch.launchCommand !== undefined ||
      patch.promptInput !== undefined,
    { message: "Patch must update at least one field" },
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@packages/host-service/src/trpc/router/settings/agent-configs.ts` around lines
149 - 159, The schema for the agent patch allows whitespace-only strings because
z.string().min(1) doesn't trim; update the object schema for label and
launchCommand to trim whitespace before validating (e.g., use
z.string().transform(s => s.trim()).min(1).optional() or z.string().refine(s =>
s.trim().length > 0, ...) wrapped as optional) so pure-space values are
rejected, keep promptInputSchema.optional() as-is, and preserve the existing
refine that requires at least one field to be present.
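A tiny standalone sketch (plain TypeScript, no zod) of why a bare length check passes whitespace-only input while trim-then-check rejects it:

```typescript
// Illustrative predicates mirroring z.string().min(1) vs z.string().trim().min(1)
const isNonEmpty = (s: string): boolean => s.length >= 1; // accepts "   "
const isNonBlank = (s: string): boolean => s.trim().length >= 1; // rejects "   "

console.log(isNonEmpty("   ")); // true — whitespace-only passes
console.log(isNonBlank("   ")); // false
console.log(isNonBlank(" npx codex ")); // true — real content survives the trim
```
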
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/renderer/routes/_authenticated/_dashboard/pending/$pendingId/page.tsx (1)
Lines 184-205: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win — Keep V2 agent-config failures out of the workspace-creation failure path.
This host-service lookup now shares the same try/catch as the create/checkout mutation, so a transient config fetch issue will mark the pending row failed after the workspace already exists. Please isolate launch-resolution failures from the create success path so users can still reach the workspace and retry launch separately.

♻️ Suggested shape of the split

-  const agentConfigs = isV2CloudEnabled
-    ? await fetchV2HostAgentConfigs(activeHostUrl)
-    : await trpcUtils.settings.getAgentPresets.fetch();
-  await dispatchForkLaunch({
-    workspaceId: result.workspace.id,
-    pending,
-    loadedAttachments,
-    agentConfigs,
-    activeHostUrl,
-    activeOrganizationId,
-    resolvedPr,
-    onApplyToRow: (patch) => {
-      collections.pendingWorkspaces.update(pendingId, (draft) => {
-        if (patch.terminalLaunch !== undefined) {
-          draft.terminalLaunch = patch.terminalLaunch;
-        }
-        if (patch.chatLaunch !== undefined) {
-          draft.chatLaunch = patch.chatLaunch;
-        }
-      });
-    },
-  });
+  try {
+    const agentConfigs = isV2CloudEnabled
+      ? await fetchV2HostAgentConfigs(activeHostUrl)
+      : await trpcUtils.settings.getAgentPresets.fetch();
+    await dispatchForkLaunch({
+      workspaceId: result.workspace.id,
+      pending,
+      loadedAttachments,
+      agentConfigs,
+      activeHostUrl,
+      activeOrganizationId,
+      resolvedPr,
+      onApplyToRow: (patch) => {
+        collections.pendingWorkspaces.update(
+          pendingId,
+          (draft) => {
+            if (patch.terminalLaunch !== undefined) {
+              draft.terminalLaunch = patch.terminalLaunch;
+            }
+            if (patch.chatLaunch !== undefined) {
+              draft.chatLaunch = patch.chatLaunch;
+            }
+          },
+        );
+      },
+    });
+  } catch {
+    // Keep the workspace creation result and surface launch failure separately.
+  }
- const agentConfigs = isV2CloudEnabled - ? await fetchV2HostAgentConfigs(activeHostUrl) - : await trpcUtils.settings.getAgentPresets.fetch(); - await dispatchForkLaunch({ - workspaceId: result.workspace.id, - pending, - loadedAttachments, - agentConfigs, - activeHostUrl, - activeOrganizationId, - resolvedPr, - onApplyToRow: (patch) => { - collections.pendingWorkspaces.update(pendingId, (draft) => { - if (patch.terminalLaunch !== undefined) { - draft.terminalLaunch = patch.terminalLaunch; - } - if (patch.chatLaunch !== undefined) { - draft.chatLaunch = patch.chatLaunch; - } - }); - }, - }); + try { + const agentConfigs = isV2CloudEnabled + ? await fetchV2HostAgentConfigs(activeHostUrl) + : await trpcUtils.settings.getAgentPresets.fetch(); + await dispatchForkLaunch({ + workspaceId: result.workspace.id, + pending, + loadedAttachments, + agentConfigs, + activeHostUrl, + activeOrganizationId, + resolvedPr, + onApplyToRow: (patch) => { + collections.pendingWorkspaces.update( + pendingId, + (draft) => { + if (patch.terminalLaunch !== undefined) { + draft.terminalLaunch = patch.terminalLaunch; + } + if (patch.chatLaunch !== undefined) { + draft.chatLaunch = patch.chatLaunch; + } + }, + ); + }, + }); + } catch { + // Keep the workspace creation result and surface launch failure separately. + }🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@apps/desktop/src/renderer/routes/_authenticated/_dashboard/pending/`$pendingId/page.tsx around lines 184 - 205, The agent-config fetch and launch resolution (calls to fetchV2HostAgentConfigs or trpcUtils.settings.getAgentPresets.fetch and dispatchForkLaunch) are currently inside the same try/catch as the workspace create/checkout flow so a transient config failure can mark the pending row failed even after the workspace was created; extract the agentConfigs fetch + dispatchForkLaunch into its own try/catch after the create/checkout succeeds so errors there do not propagate to the workspace-creation failure path, handle failures by falling back to undefined/empty agentConfigs or logging the error and updating only the pending launch state (e.g., via collections.pendingWorkspaces.update and onApplyToRow) without throwing to the outer catch, and ensure dispatchForkLaunch still receives the same args (workspaceId, pending, loadedAttachments, agentConfigs, activeHostUrl, activeOrganizationId, resolvedPr, onApplyToRow).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In
`@apps/desktop/src/renderer/routes/_authenticated/_dashboard/pending/`$pendingId/page.tsx:
- Around line 184-205: The agent-config fetch and launch resolution (calls to
fetchV2HostAgentConfigs or trpcUtils.settings.getAgentPresets.fetch and
dispatchForkLaunch) are currently inside the same try/catch as the workspace
create/checkout flow so a transient config failure can mark the pending row
failed even after the workspace was created; extract the agentConfigs fetch +
dispatchForkLaunch into its own try/catch after the create/checkout succeeds so
errors there do not propagate to the workspace-creation failure path, handle
failures by falling back to undefined/empty agentConfigs or logging the error
and updating only the pending launch state (e.g., via
collections.pendingWorkspaces.update and onApplyToRow) without throwing to the
outer catch, and ensure dispatchForkLaunch still receives the same args
(workspaceId, pending, loadedAttachments, agentConfigs, activeHostUrl,
activeOrganizationId, resolvedPr, onApplyToRow).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 93b182e0-fd95-4f2b-97ce-1a44fedada9a
📒 Files selected for processing (3)
apps/desktop/src/renderer/hooks/useEnabledAgents/hostConfigShim.ts
apps/desktop/src/renderer/hooks/useEnabledAgents/useEnabledAgents.ts
apps/desktop/src/renderer/routes/_authenticated/_dashboard/pending/$pendingId/page.tsx
dbe55a9 to 2e7d2ad — Compare
… M:1
- Client-side prompt builder fetches PR/GitHub-issue/internal-task bodies
optimistically on link-time and stitches them into the agent prompt at
submit. Bodies are cached in a module-scoped Zustand store keyed by
source:id so close-and-reopen of the modal reuses fetched content. The
agent now spawns whenever ANY context (typed prompt, linked PR, linked
issue, linked task, attachments) is present, and the host-side schema
is relaxed to allow attachment-only launches.
- Split the GH content fetches off the workspace-creation router into
pullRequests.getContent and a new issues.getContent (issuesRouter) so
the names reflect what they are (general PR/issue body fetches) rather
than what flow first needed them.
- Replace the workspace_tasks join table with a nullable task_id FK
column on v2_workspaces. The product is one-task-per-workspace; M:N
was over-engineered. Collapses linkTask + unlinkTask into one
setTask({ workspaceId, taskId | null }) mutation. Updates the renderer,
host service, mcp-v2, sdk, and trpc routers accordingly. Migration
0044 regenerated as add_task_id_to_v2_workspaces.
- Drop completed plan / spec docs that were carried on this branch.
3 issues found across 32 files (changes from recent commits).
You’re at about 92% of the monthly review limit. You may want to disable incremental reviews to conserve quota. Reviews will continue until that limit is exceeded. If you need help avoiding interruptions, please contact contact@cubic.dev.
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="packages/host-service/src/trpc/router/pull-requests/procedures/get-content.ts">
<violation number="1">
P2: The PR content cache can grow unbounded because expired entries are never evicted unless the same key is requested again.</violation>
</file>
<file name="packages/host-service/src/trpc/router/issues/procedures/get-content.ts">
<violation number="1" location="packages/host-service/src/trpc/router/issues/procedures/get-content.ts:18">
P2: Allow `author` to be nullable in the `gh issue` response schema; otherwise issues with null authors will fail parsing and return 500.</violation>
</file>
<file name="apps/desktop/src/renderer/stores/new-workspace-prompt-context/useNewWorkspacePromptContext.ts">
<violation number="1" location="apps/desktop/src/renderer/stores/new-workspace-prompt-context/useNewWorkspacePromptContext.ts:54">
P2: Prompt-context cache keys are not scoped by project/host, so reusing the same PR/issue number across projects can reuse stale body content.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
@@ -1,11 +1,29 @@
import { TRPCError } from "@trpc/server";
P2: The PR content cache can grow unbounded because expired entries are never evicted unless the same key is requested again.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/host-service/src/trpc/router/pull-requests/procedures/get-content.ts, line 108:
<comment>The PR content cache can grow unbounded because expired entries are never evicted unless the same key is requested again.</comment>
<file context>
@@ -0,0 +1,110 @@
+ pullRequestContentCache.delete(cacheKey);
+ }
+ });
+ pullRequestContentCache.set(cacheKey, { promise, fetchedAt });
+ return promise;
+ });
</file context>
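One way to address this — sketched here with illustrative names and TTL, not the actual cache code — is to sweep expired entries opportunistically on every access, so the map stays bounded even for keys that are never requested again:

```typescript
// Hedged sketch of a TTL cache with amortized sweeping.
const TTL_MS = 60_000; // illustrative TTL, not the PR's value
type CacheEntry<T> = { value: T; fetchedAt: number };
const cache = new Map<string, CacheEntry<string>>();

function sweep(now: number): void {
  // Evict every expired entry, not just the key being looked up.
  for (const [key, entry] of cache) {
    if (now - entry.fetchedAt > TTL_MS) cache.delete(key);
  }
}

function getOrFetch(key: string, now: number, fetcher: () => string): string {
  sweep(now); // amortized eviction on every access keeps the map bounded
  const hit = cache.get(key);
  if (hit && now - hit.fetchedAt <= TTL_MS) return hit.value;
  const value = fetcher();
  cache.set(key, { value, fetchedAt: now });
  return value;
}

getOrFetch("pr:1", 0, () => "a");
getOrFetch("pr:2", 70_000, () => "b"); // sweep removes the expired "pr:1"
console.log(cache.size); // 1
```

An interval-based sweeper works too, but piggybacking on access avoids a timer that keeps the process alive.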
body: z.string().nullable().optional(),
url: z.string(),
state: z.string(),
author: z.object({ login: z.string() }).optional(),
P2: Allow author to be nullable in the gh issue response schema; otherwise issues with null authors will fail parsing and return 500.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At packages/host-service/src/trpc/router/issues/procedures/get-content.ts, line 18:
<comment>Allow `author` to be nullable in the `gh issue` response schema; otherwise issues with null authors will fail parsing and return 500.</comment>
<file context>
@@ -0,0 +1,57 @@
+ body: z.string().nullable().optional(),
+ url: z.string(),
+ state: z.string(),
+ author: z.object({ login: z.string() }).optional(),
+ createdAt: z.string().optional(),
+ updatedAt: z.string().optional(),
</file context>
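The fix itself is schema-side (allowing `null` as well as absence). As a plain-TypeScript sketch of the consuming side, a missing author and an explicit `null` author can be collapsed into one fallback; the "ghost" fallback name is an assumption for illustration, not the repo's behavior:

```typescript
// Author may be absent (field omitted) or explicitly null (deleted account).
type Author = { login: string } | null | undefined;

function authorLogin(author: Author): string {
  // Optional chaining collapses both null and undefined into one fallback.
  return author?.login ?? "ghost"; // fallback name is illustrative
}

console.log(authorLogin({ login: "octocat" })); // "octocat"
console.log(authorLogin(null)); // "ghost"
console.log(authorLogin(undefined)); // "ghost"
```
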
if (linkedPR) {
  const prNumber = linkedPR.prNumber;
  store.register(`pr:${prNumber}`, () =>
P2: Prompt-context cache keys are not scoped by project/host, so reusing the same PR/issue number across projects can reuse stale body content.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At apps/desktop/src/renderer/stores/new-workspace-prompt-context/useNewWorkspacePromptContext.ts, line 54:
<comment>Prompt-context cache keys are not scoped by project/host, so reusing the same PR/issue number across projects can reuse stale body content.</comment>
<file context>
@@ -0,0 +1,89 @@
+
+ if (linkedPR) {
+ const prNumber = linkedPR.prNumber;
+ store.register(`pr:${prNumber}`, () =>
+ fetchPrBody({ prNumber, projectId, hostUrl }),
+ );
</file context>
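A sketch of the suggested scoping — the exact key shape (`host|project|source:id`) is an assumption for illustration, not the repo's actual format:

```typescript
// Scope prompt-context cache keys by host and project so identical PR/issue
// numbers in different projects cannot collide on one cache entry.
function promptContextKey(opts: {
  hostUrl: string;
  projectId: string;
  source: "pr" | "issue";
  id: number;
}): string {
  return `${opts.hostUrl}|${opts.projectId}|${opts.source}:${opts.id}`;
}

const a = promptContextKey({ hostUrl: "https://h1", projectId: "p1", source: "pr", id: 42 });
const b = promptContextKey({ hostUrl: "https://h1", projectId: "p2", source: "pr", id: 42 });
console.log(a !== b); // true — same PR number, different projects
```
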
# Conflicts:
#   packages/host-service/src/runtime/pull-requests/pull-requests.ts
PR #3893 dropped `--quiet` from `git rev-parse --verify`, shifting `args[3]` → `args[2]` and now requiring a SHA-shaped stdout. The two test mocks (refs.test.ts, resolve-start-point.test.ts) were never updated, so 19 tests have been failing on main since that PR landed. Read args[2] and return a 40-zero SHA on a hit.
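A hedged sketch of the repaired mock shape (the helper name and error text are illustrative, not the actual test code): read the ref from `args[2]` and print a 40-character SHA so the caller's SHA-shaped-stdout check passes.

```typescript
// Mock for `git rev-parse --verify <ref>` after --quiet was dropped:
// args shape is ["rev-parse", "--verify", "<ref>"], so the ref is args[2].
const ZERO_SHA = "0".repeat(40);

function mockGitRaw(args: string[], knownRefs: Set<string>): string {
  if (args[0] === "rev-parse" && args[1] === "--verify") {
    const ref = args[2];
    // A hit must print a SHA-shaped line; empty stdout now reads as "missing".
    if (knownRefs.has(ref)) return `${ZERO_SHA}\n`;
    throw new Error("fatal: Needed a single revision");
  }
  return "";
}

const out = mockGitRaw(["rev-parse", "--verify", "main"], new Set(["main"]));
console.log(out.trim().length); // 40
```
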
PR #3893 moved `workspace.create` → `workspaces.create` (plural) and removed both `workspaceCreation.create` and `workspaceCreation.getProgress`, which left 7 integration tests calling missing tRPC paths.
- bug-hunt-3, bug-hunt-2, workspace-create-delete: rename to workspaces.create
- bug-hunt-2: pin the new rollback-on-persist-failure behavior (the v1 no-rollback design that the test originally pinned no longer ships)
- workspace-create-delete: read worktree path from the persisted row (new scheme is ~/.superset/worktrees/<projectId>/<branch>) and unwrap the new { workspace, terminals, agents } return shape
- bug-hunt-v2: convert progress-store leak tests to test.todo — both the create() and getProgress() procedures they exercised are gone
PR #3893 removed the progress-store module these tests imported, which was crashing test/integration/workspace-creation-misc.integration.test.ts on load (Cannot find module 'progress-store').
* chore(desktop): biome auto-fix settings files

  Imports re-sorted and a handful of one-line wrappings cleaned up across the freshly-merged settings cleanup files. Pure formatting, no behavior change.

* fix(workspace-fs): make path-types LRU cap testable to fix CI OOM

  The cap-eviction test created 10,200 real files in a tmpdir to verify the production cap (FILE_PATHS_MAX=10_000), which OOM-killed the CI runner (exit 137). Add a constructor-level filePathsMax override so the test can verify the same eviction logic with a tiny cap (50). Suite drops from OOM to ~1.4s.

* fix(workspace-fs): widen createManager helper to accept filePathsMax

* chore(workspace-fs): biome import sort

* fix: cap turbo test concurrency at 2 to avoid CI OOM

  ubuntu-latest runners (2 vCPU / 7GB RAM) can't run all package test suites in parallel — even after the workspace-fs cap fix, the next package in the queue gets SIGKILLed for OOM. Match concurrency to vCPU.

* fix(host-service): repair refs.test mocks after dropping --quiet

  PR #3893 dropped `--quiet` from `git rev-parse --verify`, shifting `args[3]` → `args[2]` and now requiring a SHA-shaped stdout. The two test mocks (refs.test.ts, resolve-start-point.test.ts) were never updated, so 19 tests have been failing on main since that PR landed. Read args[2] and return a 40-zero SHA on a hit.

* fix(host-service): update integration tests for workspaces.create rename

  PR #3893 moved `workspace.create` → `workspaces.create` (plural) and removed both `workspaceCreation.create` and `workspaceCreation.getProgress`, which left 7 integration tests calling missing tRPC paths.
  - bug-hunt-3, bug-hunt-2, workspace-create-delete: rename to workspaces.create
  - bug-hunt-2: pin the new rollback-on-persist-failure behavior (the v1 no-rollback design that the test originally pinned no longer ships)
  - workspace-create-delete: read worktree path from the persisted row (new scheme is ~/.superset/worktrees/<projectId>/<branch>) and unwrap the new { workspace, terminals, agents } return shape
  - bug-hunt-v2: convert progress-store leak tests to test.todo — both the create() and getProgress() procedures they exercised are gone

* fix(host-service): replace dead workspace-creation-misc tests with todos

  PR #3893 removed the progress-store module these tests imported, which was crashing test/integration/workspace-creation-misc.integration.test.ts on load (Cannot find module 'progress-store').
Bumps both HOST_SERVICE_VERSION and MIN_HOST_SERVICE_VERSION from 0.6.0 to 0.7.0. PR1 (#3893) added the canonical `workspaces.create` flow and `settings.hostAgentConfigs` router; 0.6.x host-services don't expose either, so adopting one in place would break v2 new-project creation and the agent-config settings UI. The coordinator's tryAdopt will SIGTERM any pre-0.7.0 service on first launch and respawn with the new bundle.
Bundles the feat(cli) work merged since v0.2.3:
- agents run command + variadic flag support (#3893)

Push cli-v0.2.4 after this lands to fire the release pipeline.
npm has alpha.6 as the most recent published; alpha.7 was bumped in 71bf008 but never `npm publish`-ed. Skip alpha.7 on the registry and ship the current repo state as alpha.8.

Changes since alpha.6:
- workspaces.create adopts the canonical host-service shape (#3893)
- automations.list accepts --name filter (#3952)
- automations.prompt split into automations.prompt.get / .set (#3959)
- agents.list (presets demoted to UI-only configuration) (#4097)
- agents.run / workspaces.create gain `superset-chat` agent + `kind` discriminator on launch results (terminal vs chat) (#4116)
- type adjustments for v2 workspace render path (#4141)
- redact x-api-key in debug-log header dumps (#3956, was alpha.7)

After merge: `cd packages/sdk && bun run build && cd dist && npm publish --access public`.
…(PR1) (superset-sh#3893)

* docs(plans): add v2 workspace create canonical refactor specs

  Two design docs guiding a clean reimplementation of the canonical workspace.create() flow:
  - 20260425-canonical-workspace-create-flow.md: umbrella design covering the unified host-service create API, host-scoped attachments, pane store registry, prompt boundary, and PR sequencing (PRs 1-7).
  - 20260425-host-agent-configs-pr1.md: PR 1 spec for the host-runtime agent config model (host_agent_configs table + settings router).

  Sourced from the prior v2-workspace-create-canonical branch so the reimplementation can land PR-by-PR per the plan.

* docs(plans): switch host agent configs to argv-array launch spec

  Replace the single `launchCommand` string with a structured argv-array shape (`command` + `args[]` + `promptArgs[]` + `env`) matching VS Code ITerminalProfile / Tabby Shell / WezTerm SpawnCommand / Zellij panes. Empty launches drop `promptArgs` automatically, so codex/opencode/copilot no longer carry their prompt-mode flags into no-prompt sessions. Storing argv directly avoids shell-quoting bugs and makes prompt injection a list push instead of string concatenation. Adds first-class `env` overlay and the per-preset breakdown for the 8 in-scope agents. Implementation in PR1 will be redone against this shape in a separate branch; this branch is now docs-only.

* docs(plans): address review on prompt contract + PR1 scope

  canonical-workspace-create-flow.md:
  - Make `prompt` required on agent launches in both code blocks (was optional in the API shape but later text said agent launches require one and promptless = raw terminal). Reconciled to the stricter rule and added a Notes line pointing readers to the Agent Configs section.

  20260425-host-agent-configs-pr1.md:
  - Tighten the Summary to make explicit that this PR ships only the storage + tRPC router + V2 settings page. V2 modal and launch dispatch remain on legacy desktop presets and move to host configs in PR 5.
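The argv-array launch spec described in that commit can be sketched as follows. The type fields follow the plan doc (`command`, `args`, `promptArgs`, `env`); the builder function and the codex `promptArgs` value are illustrative assumptions, not the plan's actual preset table:

```typescript
// Sketch of the argv-array launch spec: prompt injection is a list push,
// and promptless launches drop promptArgs automatically.
type AgentLaunchSpec = {
  command: string;
  args: string[];
  promptArgs: string[]; // flags that only apply when a prompt is present
  env: Record<string, string>;
};

function buildArgv(spec: AgentLaunchSpec, prompt?: string): string[] {
  const argv = [spec.command, ...spec.args];
  if (prompt !== undefined && prompt.length > 0) {
    // No shell quoting needed: the prompt is a single argv element.
    argv.push(...spec.promptArgs, prompt);
  }
  return argv;
}

// Hypothetical preset — the "--" separator is an assumption for illustration.
const codex: AgentLaunchSpec = { command: "codex", args: [], promptArgs: ["--"], env: {} };
console.log(buildArgv(codex)); // → ["codex"] — promptArgs dropped
console.log(buildArgv(codex, "fix bug")); // → ["codex", "--", "fix bug"]
```
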
* wip: canonical workspace create flow + optimistic attachment uploads

  Consolidate in-flight create lifecycle into one WorkspaceCreatesManager + useWorkspaceCreates hook (submit/retry/dismiss). Navigate immediately on submit; route renders creating/error/notfound states from the in-flight store. Sidebar reads the same store and supports dismiss on hover for failed entries.

  Optimistic attachment uploads via a module-scoped Zustand store keyed (fileId, hostUrl). Files upload to whichever host was active when added; switching hosts hides their pills without re-uploading and keeps cached attachment ids for return visits. Submit awaits in-flight uploads and joins file metadata for error messaging.

  Submit-time gating moved into handleSubmit so all paths (button, Enter, Cmd+Enter) respect preconditions. Plain Enter inserts a newline; submit is button or Cmd+Enter only.

  Server-side: workspaces.create detects an existing standard-path worktree and adopts it instead of failing on git worktree add — folds the picker's adopt path into the same submit. Layout for v2-workspace falls through to the page when no real row exists so the in-flight UI can render.

  Other cleanup: drop legacy in-flight-creates store + reconciler, the v2 useCreateWorkspace wrapper, useAdoptWorktree; collapse picker checkout+adopt callbacks; UploadingAttachmentPill replaces the library pill with subtle status overlays; PR link command keeps the Show closed checkbox (pagination concern); useShallow on the draft selector to fix an infinite-loop bug.

* fix: drop simple-git --quiet on rev-parse and ship membership claim

  - refExists/localBranchExists/remoteExists: simple-git's `raw` resolves successfully with empty stdout when --quiet is on, so every "new branch" was being treated as already existing. Drop the flag and validate a 40+ hex sha was actually printed.
  - auth: include the user's full membership list in OAuth access-token claims so cross-org JWT checks pass downstream.
  - org-resource-access: split the not-found / wrong-org error paths so org-mismatch reports the actual ids in the diagnostic.
  - pull-requests: skip refresh for workspaces whose worktree was deleted on disk; simple-git would otherwise throw a confusing directory-does-not-exist error.
  - git/utils: stop logging on missing remote/origin — both null cases are expected and callers handle them.

* chore(cli): override env vars from workspace .env in dev script

  Without -o, dotenv-cli leaves any pre-existing process.env vars alone, so a shell-level `SUPERSET_HOME_DIR=~/.superset` (set globally for the production CLI) wins over the workspace's `.env` value and dev CLI ops land in the prod data dir instead of `superset-dev-data/`.

* feat(cli): add agents run command + variadic flag support

  - cli-framework: extend `.variadic()` to string flags so repeated `--flag value` invocations accumulate into an array. Drop the auto-`isRequired: true` from `.variadic()` — both existing positional callsites already chain `.required()` explicitly, no behavior change.
  - cli: new `superset agents run --workspace <id> --agent <preset|id> --prompt <text> [--attachment-id <uuid>]...` for spawning an agent inside an existing workspace. Wraps the host service `agents.run` mutation; resolves the workspace's host via the cloud lookup.

* refactor: server-generate workspace name + per-side AI rename gate

  The renderer no longer generates a friendly-random fallback. Both `name` and `branch` are now optional on the create input — the server picks a friendly random for whichever side is missing, and the AI rename only replaces the side(s) the user didn't supply.

  - `workspaces.create` schema: drop `autogenerateName`, make `name` optional, allow both `branch` and `pr` to be absent (refine relaxed to "not both set"). Server generates+dedupes a friendly branch name when `branch` is undefined; falls back to PR title or branch for workspace name when `name` is undefined.
  - `applyAiWorkspaceRename`: take `renameTitle` / `renameBranch` flags; only apply the side the caller asked for. Both `true` from the manual `aiRename` mutation; computed per-create-input on the branch path; skipped entirely on the PR path.
  - Renderer: drop `friendlyFallback` from the draft store + context + `PromptGroup` placeholder; `resolveNames` returns `string | null`; `useSubmitWorkspace` sends only what the user typed (no synthetic name). In-flight states show "Creating workspace" with no subtitle when the name is absent.
  - plans: add the v2-workspaces-create-test-plan covering this PR's divergences from v1.

* docs(plans): add comprehensive input × state matrix for workspaces.create

  Enumerates every input combination (PR vs branch mode, typed vs absent name/branch, taskIds, explicit id) crossed with the relevant pre-existing on-disk / DB state (idempotency, adoption, tag, remote-only ref, base-branch existence). Plus the failure/rollback paths and the post-refactor AI-rename behavior table. Drives the smoke-test plan and surfaces the regressions vs v1.

* refactor(host-service): inline AI naming in workspaces.create

  Move AI naming (workspace title + branch) inline so it lands in the same v2Workspace.create round-trip rather than firing a post-create rename + Electric resync. Also overlap host.ensure with the git work.

  Restructure:
  - `host.ensure` kicks off at the top, awaited inside `registerCloudAndLocal` — keeps the cloud round-trip off the critical path.
  - AI naming kicks off in parallel when the user supplied a prompt but left at least one of (name, branch) blank.
  - Auto-gen branch path: `Promise.all([aiPromise, resolveNewBranchStartPoint, listBranchNames])` — branch name = AI suggestion (deduped), falling back to a friendly random when the LLM is absent or fails. No generated-then-renamed branch and no `git branch -m`.
  - Typed branch path: `Promise.all([planBranchSource, aiPromise])`. Existing-branch detection still uses planBranchSource; AI title rename can race with that lookup.
  - PR path skips AI naming entirely (PR title + derived branch are meaningful).

  Bench (presentations repo, p50, 5 runs each):
  - typed branch, no prompt: 2400ms (no LLM)
  - typed branch, with prompt: 2044ms (AI title only)
  - auto-gen branch, with prompt: 1858ms (AI both)

  Inline-LLM is essentially free — the LLM (~700ms) overlaps with the ~900ms `git fetch`/`resolveStartPoint` step. v2_workspaces gets the right name from the start; no friendly-name flash.

* fix(host-service): wire PR-checkout recovery into workspaces.create

  The recovery utility (`recoverPrCheckoutAfterGhFailure`) shipped on main via superset-sh#3725, but was wired into the old `workspaceCreation.checkout` procedure that this PR deleted. Without recovery, `workspaces.create` fails for any merged PR whose head branch has been deleted from the remote — a common case post-merge.

  Plug the same recovery flow into the new PR-create catch block:
  - fetch prMetadata.headRefOid via `gh pr view`
  - on `gh pr checkout` failure, try the synthetic-pr-ref fallback or FETCH_HEAD recovery (utility decides based on the error)
  - rollback + throw only when recovery declined; otherwise continue past the catch with the worktree on FETCH_HEAD

* fix(host-service): adopt PR-pre-existing local branch when OID matches

  When the local repo already has the PR's head branch (typical after a prior `gh pr checkout` outside Superset), compare its OID to `prMetadata.headRefOid`:
  - match → adopt the existing branch into a new worktree (no `gh pr checkout`, no `--force`, no data-loss risk)
  - diverge → CONFLICT with a specific message naming both OIDs

  Removes the unconditional CONFLICT for stale-but-clean PR branches — the dominant case for engineers who review teammates' PRs locally. Diverged-branch case still blocks (user has commits the PR doesn't).

* feat(db): regenerate workspace_tasks migration as 0044 after merge

  Main landed 0043_submitted_prompts so our original 0043_add_workspace_tasks collided. Regenerated via `drizzle-kit generate` so it slots in cleanly on top of main's 0043.

* chore: drop pr-test.ts dev bench script

* chore: drop unused _navigate in DashboardSidebarWorkspaceItem

* feat: stitch linked-context bodies into agent prompt + workspace task M:1

  - Client-side prompt builder fetches PR/GitHub-issue/internal-task bodies optimistically on link-time and stitches them into the agent prompt at submit. Bodies are cached in a module-scoped Zustand store keyed by source:id so close-and-reopen of the modal reuses fetched content. The agent now spawns whenever ANY context (typed prompt, linked PR, linked issue, linked task, attachments) is present, and the host-side schema is relaxed to allow attachment-only launches.
  - Split the GH content fetches off the workspace-creation router into pullRequests.getContent and a new issues.getContent (issuesRouter) so the names reflect what they are (general PR/issue body fetches) rather than what flow first needed them.
  - Replace the workspace_tasks join table with a nullable task_id FK column on v2_workspaces. The product is one-task-per-workspace; M:N was over-engineered. Collapses linkTask + unlinkTask into one setTask({ workspaceId, taskId | null }) mutation. Updates the renderer, host service, mcp-v2, sdk, and trpc routers accordingly. Migration 0044 regenerated as add_task_id_to_v2_workspaces.
  - Drop completed plan / spec docs that were carried on this branch.

---------

Co-authored-by: Satya Patel <satyapatel111@gmail.com>
Recorded as integrated via -s ours after batch PRs #455-#464.

Taken via individual PRs:
- PR 1 (#455): v2 polish first-half safe set (9 commits)
- PR 2 (#456): v2/host-service polish middle batch (12 commits)
- PR 3 (#457): sidebar polish + jwt API (5 commits)
- PR 4 (#458): host-service tRPC retry/cache/timeout (3 commits)
- PR 5 (#459): v2 diff pane / file pane polish (2 commits)
- PR 7 (#462): host-service v2 canonical workspace.create + attachment store (PR1 superset-sh#3893 + PR2 superset-sh#3916)
- PR 11 (#463): agents API + onboarding (7 commits + 1 cleanup)
- PR 12 (#464): v1→v2 import flow rewrite (11 commits + 2 follow-ups)
- PR 13 (#460): host-service shell env probe + typo (2 commits)
- PR 16 (#461): marketplace 19 themes (1 commit)

Skipped / deferred (recorded as integrated for behind=0):
- PR 6: CLI v1 launch (superset-sh#3898 + 30+ CLI/SDK followups) — defer to dedicated migration
- PR 9: v2 PR3 (superset-sh#3940) + revert (superset-sh#4017) — net-zero pair
- PR 10: pty-daemon (superset-sh#3896, superset-sh#3971, superset-sh#4054) — fork keeps its terminal-host
- PR 14: Slack MCP-v2 (superset-sh#4197, superset-sh#4208) — depends on mcp-v2/sdk divergence
- PR 15: onboarding remaining (superset-sh#4115, superset-sh#4125, superset-sh#4214, superset-sh#4213, superset-sh#4222, superset-sh#4225) — depends on fork's deleted setup pages

Behind: 0 after this merge.
After upstream's superset-sh#3893 (PR 7) bumped @vscode/ripgrep 1.17.1 → 1.18.0, the platform-specific binary moved from a postinstall download to optional subpackages (e.g. @vscode/ripgrep-darwin-arm64). bun install --ignore-scripts in CI / electron-builder's traversalNodeModulesCollector wasn't bundling these into app.asar, so the runtime threw 'Could not find @vscode/ripgrep-darwin-arm64' on launch. Add all platform variants as explicit dependencies in apps/desktop/package.json so electron-builder includes them per host arch. Also stub packages/mcp-v2/package.json and packages/sdk/package.json (added by sherif --fix) — these dirs existed without package.json so workspace validation was warning.
Brings back plans/20260501-linear-team-entity.md (deleted incidentally in #3893). Updated to drop the OAuth refresh workstream (already shipped via #4002), collapse the three-table teams design into a single teams entity with the counter on the row, and switch disconnect/reconnect to soft-delete so identifiers survive Linear disconnect/reconnect cycles.
…4386)

* fix(integrations): prevent duplicate workspace linkage across orgs

  Two different Superset orgs could OAuth to the same Linear/Slack workspace, leaving the webhook handler to pick a winner non-deterministically via `findFirst` — so each webhook landed in a random org. Affected 153 orgs and 167 connection rows in prod (worst case: 13 active orgs claiming one Linear workspace).

  - Schema: partial unique `(provider, external_org_id) WHERE disconnected_at IS NULL` on `integration_connections`.
  - Migration: backfills duplicate groups by keeping the most recently updated active row per `(provider, external_org_id)` and soft-disconnecting the rest with reason `duplicate_resolved`, then adds the constraint in the same file.
  - OAuth callbacks (Linear, Slack): pre-check for active linkage in another org and short-circuit with `?error=workspace_already_linked&owner=<connector email>`; also catch `23505` for the race window.
  - Webhook + Slack event handlers: filter `isNull(disconnectedAt)` and `orderBy desc(updatedAt)` so even if a stray duplicate slips in, routing is deterministic.
  - Error UI: surfaces the connector's email so the user knows who to ping to disconnect; toast call deferred via `setTimeout(0)` so it fires after Sonner's `<Toaster />` subscribes on hydration.

* docs(plans): restore linear teams + per-team numbering plan

  Brings back plans/20260501-linear-team-entity.md (deleted incidentally in #3893). Updated to drop the OAuth refresh workstream (already shipped via #4002), collapse the three-table teams design into a single teams entity with the counter on the row, and switch disconnect/reconnect to soft-delete so identifiers survive Linear disconnect/reconnect cycles.

* chore(email): one-shot notifier for dup-linkage cleanup

  Adds the email template and a runnable script for notifying the 92 users whose Linear/Slack connections were soft-disconnected by the 0048 migration. Bullets out each affected (org, provider, workspace, new-owner-email) tuple so the recipient can see exactly what changed and reach out to the current owner if they want to take over. Script supports `--dry-run`, `--test[=email]`, and `--send`. Requires `NEXT_PUBLIC_MARKETING_URL=https://superset.sh` on the command line so the logo/social icons in the email body resolve from prod rather than the localhost URL baked into root .env.

* fix(integrations): address pr review nits

  - Slack callback UPSERT now clears `disconnectedAt` + `disconnectReason` on reconnect, matching the Linear callback. Without this, a Slack reconnect after a soft-disconnect would leave the row hidden from the new `isNull(disconnectedAt)` filter in event handlers, and webhooks would silently no-op.
  - Webhook + 6 Slack lookup sites now order by `(updatedAt desc, id desc)` so `findFirst` is deterministic on equal timestamps. Belt-and-suspenders; the partial unique constraint makes ties impossible in practice.
  - Tag the resolver pseudo-code fence in the slug plan as `text` for markdownlint.
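The review-nits commit makes `findFirst` deterministic by ordering on `(updatedAt desc, id desc)`. A standalone TypeScript sketch of that selection rule; the row shape and data here are illustrative, not the real Drizzle query:

```typescript
interface ConnectionRow {
  id: number;
  updatedAt: number; // epoch millis
  disconnectedAt: number | null;
}

// Pick the active connection deterministically: newest updatedAt wins,
// ties broken by highest id, soft-disconnected rows excluded entirely.
function pickActive(rows: ConnectionRow[]): ConnectionRow | undefined {
  return rows
    .filter((r) => r.disconnectedAt === null)
    .sort((a, b) => b.updatedAt - a.updatedAt || b.id - a.id)[0];
}

const rows: ConnectionRow[] = [
  { id: 1, updatedAt: 10, disconnectedAt: null },
  { id: 2, updatedAt: 10, disconnectedAt: null }, // ties with id 1
  { id: 4, updatedAt: 99, disconnectedAt: 123 }, // soft-disconnected, ignored
];
console.log(pickActive(rows)?.id); // the id-2 row wins the tie
```

With a single sort key, equal timestamps would leave the winner up to the database's scan order; the secondary `id` key removes that nondeterminism even before the unique constraint forbids duplicates.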
Summary
Two design docs guiding the v2 workspace creation refactor. (Branch reset to docs-only; implementation lives in #3914.)
- `plans/20260425-canonical-workspace-create-flow.md` — umbrella design covering the full canonical `workspace.create()` flow: host-scoped attachment store, host-local agent config model, single host-service orchestration endpoint, pane store registry, and the renderer/automations/CLI migration. Splits into 7 PRs.
- `plans/20260425-host-agent-configs-pr1.md` — PR1 spec for the host agent config model, updated to use an argv-array launch spec (`command` + `args[]` + `promptArgs[]` + `promptTransport` + `env`) matching VS Code `ITerminalProfile`, Tabby `Shell`, WezTerm `SpawnCommand`, and Zellij panes. Storing argv directly avoids shell-quoting bugs and makes prompt injection a list push instead of string concatenation.

Why argv-array
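A minimal TypeScript sketch of the argv-array idea, assuming the field names from the PR1 bullet above (`command`, `args`, `promptArgs`, `promptTransport`, `env`); the agent name and flags below are purely illustrative, not the real v2 configs:

```typescript
// Hypothetical launch-spec shape; field names follow the PR1 spec,
// everything else (agent, flags) is made up for illustration.
interface AgentLaunchSpec {
  command: string; // executable resolved on the host
  args: string[]; // fixed argv tail, never shell-quoted
  promptArgs: string[]; // flags that precede the prompt, when one exists
  promptTransport: "arg" | "stdin"; // how the prompt text is delivered
  env: Record<string, string>;
}

// Prompt injection is a list push, not string concatenation, so spaces
// and quotes in the prompt text cannot break the command line.
function buildArgv(spec: AgentLaunchSpec, prompt?: string): string[] {
  const argv = [spec.command, ...spec.args];
  if (prompt !== undefined && spec.promptTransport === "arg") {
    argv.push(...spec.promptArgs, prompt); // empty launches drop promptArgs
  }
  return argv; // stdin transport would write the prompt after spawn instead
}

const exampleAgent: AgentLaunchSpec = {
  command: "some-agent",
  args: ["--mode", "plan"],
  promptArgs: [],
  promptTransport: "arg",
  env: {},
};
console.log(buildArgv(exampleAgent, 'fix the "flaky" test'));
// argv: ["some-agent", "--mode", "plan", "fix the \"flaky\" test"]
```

Because the array is handed straight to a `spawn`-style API, there is no shell in the middle to re-tokenize the prompt, which is the quoting bug class the string-based `launchCommand` shape invited.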
The original draft used a single `launchCommand` string. After surveying open-source launchers and auditing every existing v1 agent, that shape was lossy for codex/opencode/copilot's prompt-mode flags and for codex's `--separator`. The argv-array shape handles all 8 in-scope agents losslessly: empty launches drop `promptArgs` automatically.

Test plan
Summary by CodeRabbit