feat(agent): add MCP server and desktop command system #927
📝 Walkthrough

This PR introduces MCP (Model Context Protocol) server support with API key authentication, device presence tracking, and agent commands infrastructure. It adds three database tables (device_presence, agent_commands, api_keys), an MCP HTTP route with tool registration, desktop UI for managing devices and API keys, and cross-platform device presence heartbeats via mobile and desktop hooks.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor Client as External Client
    participant MCP as MCP Route Handler
    participant Auth as Auth Service
    participant Tools as Tool Registry
    participant Server as MCP Server
    participant DB as Database
    Client->>MCP: POST /api/mcp (with X-API-Key)
    MCP->>Auth: authenticateMcpRequest()
    Auth->>DB: Query API key hash
    DB-->>Auth: Return McpContext (userId, orgId)
    Auth-->>MCP: Return McpContext
    MCP->>Server: createMcpServer(ctx)
    MCP->>Tools: registerMcpTools(server, ctx)
    Tools->>Server: Register tool handlers
    MCP->>Server: Connect transport & process request
    Server->>Tools: Execute tool (e.g., create_task)
    Tools->>DB: Insert/update records
    DB-->>Tools: Return result
    Tools-->>Server: Return response
    Server-->>MCP: Return JSON-RPC response
    MCP-->>Client: HTTP 200 with response
```
```mermaid
sequenceDiagram
    participant Server as API Server
    participant DB as Database
    participant Desktop as Desktop App
    participant CommandWatcher as useCommandWatcher Hook
    Server->>DB: Create agent_command (pending)
    DB-->>Server: Command inserted
    Desktop->>DB: Query pending commands for deviceId
    CommandWatcher->>DB: Watch agent_commands collection
    DB-->>CommandWatcher: Stream pending command
    CommandWatcher->>DB: Update status=claimed
    CommandWatcher->>CommandWatcher: Validate tool params
    CommandWatcher->>CommandWatcher: Execute tool
    CommandWatcher->>DB: Update status=executing
    CommandWatcher->>CommandWatcher: Await tool completion
    CommandWatcher->>DB: Update status=completed, set result
    DB-->>Server: Poll reflects completed status
    Server-->>Desktop: Command processed ✓
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)

✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Actionable comments posted: 11
Note
Due to the large number of review comments, Critical and Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
apps/desktop/src/lib/trpc/routers/auth/utils/crypto-storage.ts (1)
21-58: Log machine-ID lookup failures instead of swallowing errors.

The empty `catch` makes failures invisible, which will be harder to diagnose now that `getMachineId` is used for presence tracking. Add a contextual log while still falling back. As per coding guidelines.

🧾 Suggested fix

```diff
-  } catch {
-    // Fallback if platform-specific method fails
-  }
+  } catch (err) {
+    console.warn(
+      "[auth/getMachineId] Failed to read platform machine ID; using fallback",
+      err,
+    );
+  }
```
🤖 Fix all issues with AI agents
In @.mcp.json:
- Around lines 2-8: Update the environment variable referenced for the Superset MCP API key: inside the mcpServers -> superset configuration, update the headers entry for "X-API-Key" to use the SUPERSET_MCP_API_KEY env var (replacing the current SUPERSET_API_KEY reference) so it matches .env.example; also scan for any other uses of SUPERSET_API_KEY and replace those that should point to SUPERSET_MCP_API_KEY, to avoid a missing header/401s.
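A minimal sketch of the aligned entry (the server name, URL, and surrounding keys are illustrative; the point is only the header variable, and this assumes the MCP client expands `${VAR}` references from the environment):

```json
{
  "mcpServers": {
    "superset": {
      "url": "https://your-api.example.com/api/mcp",
      "headers": {
        "X-API-Key": "${SUPERSET_MCP_API_KEY}"
      }
    }
  }
}
```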
In `@apps/api/src/lib/mcp/auth.ts`:
- Around line 12-22: authenticateMcpRequest currently calls validateApiKey
directly which can throw DB errors; wrap the validateApiKey call in a try/catch
inside authenticateMcpRequest, log the caught error (including contextual info
such as that the API key validation failed and a masked/partial apiKey value)
and return null on failure so exceptions do not bubble out; reference the
authenticateMcpRequest function and the validateApiKey call when making the
change.
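The requested guard can be sketched as follows; `McpContext` and `validateApiKey` are stand-ins for the real module's exports, the stub below simulates a DB failure, and the masking helper is hypothetical:

```typescript
type McpContext = { userId: string; orgId: string };

// Stand-in for the real DB-backed lookup; here it simulates a DB outage.
async function validateApiKey(apiKey: string): Promise<McpContext | null> {
  throw new Error(`db connection lost while validating ${apiKey.length}-char key`);
}

// Log only a short prefix of the key, never the full value.
function maskKey(apiKey: string): string {
  return `${apiKey.slice(0, 4)}…`;
}

async function authenticateMcpRequest(apiKey: string): Promise<McpContext | null> {
  try {
    return await validateApiKey(apiKey);
  } catch (error) {
    console.warn("[mcp/auth] API key validation failed", {
      error,
      apiKey: maskKey(apiKey),
    });
    return null; // callers already treat null as 401, so DB errors no longer bubble out
  }
}
```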
In `@apps/api/src/lib/mcp/tools.ts`:
- Around line 807-855: The create_workspace tool currently accepts taskId but
doesn't forward it to the device: update the params passed to executeOnDevice in
the server.tool handler (create_workspace) to include params.taskId (or
undefined when absent) so the device command receives taskId; specifically, add
taskId: params.taskId to the object passed as params in the executeOnDevice call
so the desktop can associate the workspace with the task.
- Around line 162-255: The update_task handler allows all fields to be optional
so updateData can be empty and Drizzle will throw when calling
tx.update(...).set({}); before beginning the transaction in the update_task
server.tool, check the constructed updateData (the object populated from
params.title/description/priority/assigneeId/statusId/labels/dueDate/estimate)
and if it has no keys return a validation error (similar shape to the existing
"Task not found" error) rejecting empty updates instead of calling
dbWs.transaction; this prevents calling tx.update(...).set({}) and surfaces a
clear client error.
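A minimal sketch of the empty-update guard, using a reduced set of illustrative fields (the real handler covers title/description/priority/assigneeId/statusId/labels/dueDate/estimate), checked before any transaction is opened:

```typescript
type UpdateTaskParams = {
  title?: string;
  description?: string;
  priority?: number;
};

// Build the partial update object, including only the fields the caller sent.
function buildUpdateData(params: UpdateTaskParams): Record<string, unknown> {
  const updateData: Record<string, unknown> = {};
  if (params.title !== undefined) updateData.title = params.title;
  if (params.description !== undefined) updateData.description = params.description;
  if (params.priority !== undefined) updateData.priority = params.priority;
  return updateData;
}

// Reject empty updates before calling dbWs.transaction / tx.update(...).set({}).
function validateUpdate(params: UpdateTaskParams): { error?: string } {
  if (Object.keys(buildUpdateData(params)).length === 0) {
    // Same shape as the existing "Task not found" error path.
    return { error: "No fields to update" };
  }
  return {};
}
```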
In `@apps/desktop/electron.vite.config.ts`:
- Around line 72-73: The Vite config is inlining secrets by using
defineEnv/define for "process.env.OPENAI_API_KEY" and
"process.env.ANTHROPIC_API_KEY"; remove these build-time replacements (delete
the defineEnv entries and any use of define for those keys in
electron.vite.config.ts) and instead load API keys at runtime (e.g., read from
OS keychain, user settings, or process.env in the Electron main process) and
pass them to the renderer via a secure IPC or contextBridge method; update any
code that expects compile-time globals to instead call the new runtime accessor
function you create in the main process (or a secure helper) so keys are never
baked into the bundle.
In `@apps/desktop/src/lib/trpc/routers/auth/index.ts`:
- Around line 93-102: getDeviceInfo is exposing raw hardware identifiers
(getMachineId and hostname) via publicProcedure; replace the raw values with a
non-reversible derived ID and restrict access. Specifically, change
getDeviceInfo to either (a) compute a one-way hash/HMAC of getMachineId combined
with an app-specific secret/salt (or a persisted per-install UUID) and return
that derived deviceId instead of the raw getMachineId value, and (b) change the
procedure from publicProcedure to an authenticated/privileged procedure (e.g.,
protectedProcedure/authenticatedProcedure) so only authorized callers can query
device presence; keep deviceName anonymized or remove it if not necessary.
Ensure the derivation uses a stable secret persisted server- or app-side so the
ID is stable but non-reversible.
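The derivation step can be sketched with Node's built-in crypto; how the secret is persisted (keychain, settings file, per-install UUID) is left as an assumption:

```typescript
import { createHmac } from "node:crypto";

// Hypothetical helper: derives a stable, non-reversible deviceId from the raw
// machine ID. The secret must be persisted app-side so the ID stays stable
// across restarts while remaining infeasible to reverse without the secret.
function deriveDeviceId(rawMachineId: string, secret: string): string {
  return createHmac("sha256", secret).update(rawMachineId).digest("hex");
}
```

Because HMAC-SHA256 is keyed, the same input always yields the same 64-hex-character ID, but the raw machine ID cannot be recovered from it.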
In `@apps/desktop/VOICE_AGENT_PLAN.md`:
- Around line 73-101: The AudioRecorder lacks error handling, track cleanup and
a cancellation path; update the class (AudioRecorder) to store the MediaStream
on the instance (e.g., this.stream), wrap navigator.mediaDevices.getUserMedia in
try/catch inside start() and propagate or log errors, guard against null
mediaRecorder in stop(), add an explicit cancel()/dispose() method that stops
and releases all MediaStreamTracks and aborts an in-progress MediaRecorder, and
ensure onstop/onerror handlers always stop tracks and clean
this.mediaRecorder/this.chunks; keep convertToWav and blobToBase64 usage but
ensure any exceptions from them are caught in stop() and cause the promise to
reject or resolve with a failure indicator.
- Around line 265-297: The hook useVoiceCommand leaks resources and lacks
imports/error handling; fix by importing electronTrpc, wrap async operations in
startRecording and stopRecording with try/catch/finally to call store.setError
and ensure store.setProcessing is toggled correctly, store the result of
electronTrpc.voice.runAgent.subscribe in a ref (e.g., agentSubRef) and call
agentSubRef.current?.unsubscribe() when stopping and in a useEffect cleanup, and
dispose the AudioRecorder (recorder.current.dispose() or equivalent) in the same
cleanup to stop media tracks; update startRecording/stopRecording to guard
against concurrent calls (no-op if already recording) and ensure
onError/onComplete still update store state.
In `@packages/db/src/schema/agent.ts`:
- Around line 198-209: canRunTool currently defines toolRequirements inline and
uses toolRequirements[tool] ?? [] which permits unknown tools; change to a
top-level constant (e.g., TOOL_REQUIREMENTS: Record<string,string[]>) and update
canRunTool to accept an object param (e.g., canRunTool({ type, tool })) per the
style guide; in the function, look up requirements with TOOL_REQUIREMENTS[tool]
and if undefined return false (default-deny) otherwise call
getDeviceCapabilities(type) and verify every required capability is present.
Ensure you update all call sites to the new object-parameter signature.
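A sketch of the default-deny lookup; the tool names, capability strings, and the stubbed `getDeviceCapabilities` below are illustrative, not the actual schema values:

```typescript
// Top-level registry: tools absent from this map are denied outright.
const TOOL_REQUIREMENTS: Record<string, string[]> = {
  create_workspace: ["filesystem", "git"],
  navigate_to_workspace: ["ui"],
};

// Stand-in for the real schema helper.
function getDeviceCapabilities(type: string): string[] {
  if (type === "desktop") return ["filesystem", "git", "ui"];
  if (type === "mobile") return ["ui"];
  return [];
}

function canRunTool({ type, tool }: { type: string; tool: string }): boolean {
  const requirements = TOOL_REQUIREMENTS[tool];
  if (requirements === undefined) return false; // default-deny unknown tools
  const capabilities = getDeviceCapabilities(type);
  return requirements.every((req) => capabilities.includes(req));
}
```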
In `@packages/trpc/src/lib/agent/cloud-agent.ts`:
- Around line 443-499: The executeUpdateTask function can call drizzle's
.update().set() with an empty updateData causing a "No values to set" error;
before starting the dbWs.transaction (in executeUpdateTask) check whether
updateData has any keys (e.g., Object.keys(updateData).length === 0) and return
a failure ToolResult (e.g., { success: false, error: "No fields to update" })
instead of proceeding; locate this logic around updateData construction and the
dbWs.transaction call in executeUpdateTask to add the guard (mirroring the
pattern used in api-keys.ts).
In `@packages/trpc/src/router/agent/agent.ts`:
- Around line 163-242: Add an authorization guard in updateCommand that ensures
only the command's owner/creator or the owning device can update it: after
fetching existingCommand (from agentCommands) grab the current principal (e.g.
ctx.session.session.userId and/or ctx.session.session.deviceId), then compare
against the command ownership fields on existingCommand (e.g.
existingCommand.createdBy, existingCommand.ownerId, or existingCommand.claimedBy
depending on your schema); if none match, throw a TRPCError with code
"FORBIDDEN". Place this check before building updateData and the db.update call
in the updateCommand mutation (use the existingCommand and ctx.session values to
locate the logic). Ensure tests or callers that expect updates use an authorized
principal.
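The ownership check can be sketched like this; the field names (`createdBy`, `claimedBy`) follow the comment above and may differ from the actual schema, and a plain `Error` stands in for `TRPCError({ code: "FORBIDDEN" })`:

```typescript
type Principal = { userId: string; deviceId?: string };
type ExistingCommand = { createdBy: string; claimedBy: string | null };

// Run after fetching existingCommand and before building updateData.
function assertCanUpdateCommand(command: ExistingCommand, principal: Principal): void {
  const isCreator = command.createdBy === principal.userId;
  const isClaimingDevice =
    principal.deviceId !== undefined && command.claimedBy === principal.deviceId;
  if (!isCreator && !isClaimingDevice) {
    // In the real mutation: throw new TRPCError({ code: "FORBIDDEN" })
    throw new Error("FORBIDDEN");
  }
}
```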
♻️ Duplicate comments (1)
.env.example (1)
100-102: Duplicate: env var naming mismatch with `.mcp.json`.

Please align the `.mcp.json` header variable with `SUPERSET_MCP_API_KEY` (see prior comment).
🟡 Minor comments (10)
apps/desktop/VOICE_AGENT_PLAN.md-301-312 (1)
301-312: Add language specifier to file structure code block.

Similar to the architecture diagram, this fenced code block lacks a language specifier, which triggers linting warnings.
📝 Suggested fix
````diff
-```
+```text
 apps/desktop/src/
````

apps/desktop/VOICE_AGENT_PLAN.md-7-45 (1)
7-45: Add language specifier to fenced code block.

The architecture diagram code block is missing a language specifier. While ASCII art diagrams don't require syntax highlighting, explicitly marking them as `text` avoids the linting warnings that unmarked blocks can trigger.

📝 Suggested fix

````diff
-```
+```text
 ┌─────────────────────────────────────────────────────────────────┐
````

apps/desktop/VOICE_AGENT_PLAN.md-134-201 (1)
134-201: Add observable completion and verify tools format for actual implementation.

The Claude Agent SDK API usage is correct (`query()` with `prompt` and `options` parameters). When this plan is implemented as actual code:

- Call `emit.complete()` after the agent finishes streaming to properly complete the subscription and clean up resources.
- Add missing imports for `transcribeAudio`, `createWorkspace`, `getAllWorkspaces`, and `getActiveWorkspace` in the actual implementation.
- Verify the tools format against the latest Claude Agent SDK documentation to ensure the `workspaceTools` object structure matches the SDK's expected format for tool definitions.

The observable's error handling is already in place; only the completion signal needs to be added.
apps/desktop/src/renderer/routes/_authenticated/settings/api-keys/components/ApiKeysSettings/ApiKeysSettings.tsx-119-123 (1)
119-123: Handle clipboard failures and extract timeout constant.

`navigator.clipboard.writeText()` returns a promise and must be awaited. Without error handling, the "Copied" message displays even if the operation fails. Per the coding guidelines, extract the 2000ms timeout to a named constant and log errors with context.

Suggested fix

```diff
+const COPY_FEEDBACK_DURATION_MS = 2000;
+
 const handleCopyKey = async () => {
-  navigator.clipboard.writeText(newKeyValue);
-  setCopied(true);
-  setTimeout(() => setCopied(false), 2000);
+  try {
+    await navigator.clipboard.writeText(newKeyValue);
+    setCopied(true);
+    setTimeout(() => setCopied(false), COPY_FEEDBACK_DURATION_MS);
+  } catch (error) {
+    console.error("[api-keys/copy] Failed to copy API key:", error);
+  }
 };
```

packages/trpc/src/router/agent/api-keys.ts-265-269 (1)
265-269: Don't silently ignore `lastUsedAt` update failures.

The current `.catch(() => {})` hides errors entirely; please log with context using the standard prefix pattern. As per coding guidelines, never swallow errors silently.

🧾 Add contextual logging

```diff
 db.update(apiKeys)
   .set({ lastUsedAt: new Date() })
   .where(eq(apiKeys.id, found.id))
-  .catch(() => {}); // Ignore errors
+  .catch((error) => {
+    console.warn("[agent/validateApiKey] failed to update lastUsedAt", {
+      error,
+      apiKeyId: found.id,
+    });
+  });
```

apps/desktop/src/renderer/hooks/useCommandWatcher/tools/navigate-to-workspace.ts-20-39 (1)
20-39: Handle ambiguous workspace name matches.

`find` will pick the first match when multiple workspaces share the same name/branch, which can send the agent to the wrong workspace. Consider rejecting ambiguous matches and requiring `workspaceId`.

💡 Suggested fix

```diff
-    const found = workspaces.find(
+    const matches = workspaces.filter(
       (ws) =>
         ws.name.toLowerCase() === searchName ||
         ws.branch.toLowerCase() === searchName,
     );
-    if (!found) {
+    if (matches.length === 0) {
       return {
         success: false,
         error: `Workspace "${params.workspaceName}" not found`,
       };
     }
-    targetWorkspaceId = found.id;
+    if (matches.length > 1) {
+      return {
+        success: false,
+        error: `Multiple workspaces match "${params.workspaceName}". Please specify workspaceId.`,
+      };
+    }
+    targetWorkspaceId = matches[0].id;
```

apps/mobile/hooks/useDevicePresence/useDevicePresence.ts-26-42 (1)
26-42: Don't swallow SecureStore errors; standardize log prefix.

Lines 32-41 silently swallow storage failures, and lines 98-101 log with `"[device-presence]"` instead of a `[domain/operation]` prefix. Add warning logs for SecureStore failures and update heartbeat logs to `[device-presence/heartbeat]`. As per coding guidelines, never swallow errors silently and use `[domain/operation]` prefixes.

🔧 Suggested fix

```diff
-      } catch {
-        // SecureStore may fail on some devices, continue to generate new ID
-      }
+      } catch (error) {
+        console.warn("[device-presence/storage] Failed to read device id:", error);
+      }
 ...
-      } catch {
-        // If we can't persist, we'll generate a new one each session
-        // This is acceptable - the device will just appear as "new" each time
-      }
+      } catch (error) {
+        console.warn("[device-presence/storage] Failed to persist device id:", error);
+        // If we can't persist, we'll generate a new one each session
+        // This is acceptable - the device will just appear as "new" each time
+      }
 ...
-      console.log("[device-presence] Heartbeat sent");
+      console.log("[device-presence/heartbeat] Heartbeat sent");
 ...
-      console.warn("[device-presence] Heartbeat failed:", error);
+      console.warn("[device-presence/heartbeat] Heartbeat failed:", error);
```

Also applies to: 93-101
apps/api/src/app/api/mcp/route.ts-133-143 (1)
133-143: Align CORS Allow-Methods with actual support.

Line 138 advertises GET/DELETE even though those handlers return 405. Consider limiting to `POST, OPTIONS` to avoid misleading preflight results.

✏️ Suggested tweak

```diff
-      "Access-Control-Allow-Methods": "GET, POST, DELETE, OPTIONS",
+      "Access-Control-Allow-Methods": "POST, OPTIONS",
```

apps/api/MCP_TOOLS.md-226-246 (1)
226-246: Align `create_worktree` docs with implementation naming/schema.

Docs describe `create_worktree` (and include `taskId`), but the desktop tool exports `create_workspace` and doesn't accept `taskId` today (apps/desktop/src/renderer/hooks/useCommandWatcher/tools/create-worktree.ts, lines 62-64). Please align the docs or implementation to avoid agents calling the wrong tool or passing unsupported fields.

✏️ Possible doc alignment

```diff
-#### `create_worktree`
+#### `create_workspace`
 ...
-const createWorktreeInput = z.object({
+const createWorkspaceInput = z.object({
 ...
-const createWorktreeOutput = z.object({
+const createWorkspaceOutput = z.object({
```

apps/api/src/app/api/mcp/route.ts-56-59 (1)
56-59: Add CORS headers to POST response and error responses.

The browser CORS spec requires `Access-Control-Allow-Origin` and `Access-Control-Expose-Headers` on the actual response, not just the OPTIONS preflight. The response from `transport.handleRequest()` (line 57) and the error response (lines 63-78) both need CORS headers to match the OPTIONS handler configuration.

🔧 Fix for POST response

```diff
     const response = await transport.handleRequest(request);
-
-    return response;
+    const headers = new Headers(response.headers);
+    headers.set("Access-Control-Allow-Origin", "*");
+    headers.set(
+      "Access-Control-Expose-Headers",
+      "mcp-session-id, mcp-protocol-version",
+    );
+
+    return new Response(response.body, {
+      status: response.status,
+      statusText: response.statusText,
+      headers,
+    });
```

Also wrap the error response (lines 63-78) with the same CORS headers.
🧹 Nitpick comments (26)
apps/mobile/screens/(authenticated)/index/HomeScreen.tsx (3)
19-28: Consider importing the type from the API router and fixing the `lastSeenAt` type.

The `OnlineDevice` interface is defined locally, but should ideally be inferred from the tRPC router to stay in sync with the API. Additionally, `lastSeenAt` is typed as `Date`, but tRPC/JSON serialization returns ISO strings. The `formatLastSeen` function re-wraps it with `new Date(date)`, confirming the runtime value is likely a string.

Suggested approach

```diff
-interface OnlineDevice {
-  id: string;
-  deviceId: string;
-  deviceName: string;
-  deviceType: "desktop" | "mobile" | "web";
-  lastSeenAt: Date;
-  ownerId: string;
-  ownerName: string;
-  ownerEmail: string;
-}
+import type { RouterOutput } from "@/lib/trpc/client";
+
+type OnlineDevice = RouterOutput["agent"]["listOnlineDevices"][number];
```

If the router output type isn't exported, at minimum fix the local type:

```diff
-  lastSeenAt: Date;
+  lastSeenAt: Date | string;
```
49-53: Extract the polling interval to a named constant.

Per coding guidelines, magic numbers should be extracted to named constants at module top for clarity and maintainability.

Suggested fix

Add at the top of the file (after imports):

```ts
const DEVICE_REFRESH_INTERVAL_MS = 10_000;
```

Then update the usage:

```diff
-    const interval = setInterval(fetchDevices, 10_000);
+    const interval = setInterval(fetchDevices, DEVICE_REFRESH_INTERVAL_MS);
```
55-61: Move pure helper function outside the component.

`formatLastSeen` is a pure function with no dependencies on component state or props. Moving it outside the component avoids recreating it on every render.

Suggested fix

Move above the component definition:

```ts
const formatLastSeen = (date: Date | string) => {
  const seconds = Math.floor((Date.now() - new Date(date).getTime()) / 1000);
  if (seconds < 60) return `${seconds}s ago`;
  const minutes = Math.floor(seconds / 60);
  if (minutes < 60) return `${minutes}m ago`;
  return new Date(date).toLocaleTimeString();
};

export function HomeScreen() {
  // ...
}
```

apps/desktop/VOICE_AGENT_PLAN.md (3)
108-127: Include action methods in the interface for better type safety.

The `VoiceState` interface only defines state properties but omits the action methods (`startRecording`, `stopRecording`, etc.). This reduces type safety and IDE autocomplete support.

♻️ Proposed refactor

```diff
 interface VoiceState {
+  // State
   isRecording: boolean;
   isProcessing: boolean;
   messages: AgentMessage[];
   error: string | null;
+
+  // Actions
+  startRecording: () => void;
+  stopRecording: () => void;
+  setProcessing: (v: boolean) => void;
+  addMessage: (msg: AgentMessage) => void;
+  setError: (err: string) => void;
 }

 export const useVoiceStore = create<VoiceState>((set) => ({
   isRecording: false,
   isProcessing: false,
   messages: [],
   error: null,
   startRecording: () => set({ isRecording: true, messages: [], error: null }),
   stopRecording: () => set({ isRecording: false }),
   setProcessing: (v: boolean) => set({ isProcessing: v }),
   addMessage: (msg: AgentMessage) =>
     set((s) => ({ messages: [...s.messages, msg] })),
   setError: (err: string) => set({ error: err, isProcessing: false }),
 }));
```
232-258: Consider displaying error state to the user.

The `VoiceOverlay` component accesses the `error` state from the store but doesn't display it. Users should be notified when voice commands fail.

💡 Suggested enhancement

```diff
 export function VoiceOverlay() {
-  const { isRecording, isProcessing, messages } = useVoiceStore();
+  const { isRecording, isProcessing, messages, error } = useVoiceStore();

-  if (!isRecording && !isProcessing && messages.length === 0) return null;
+  if (!isRecording && !isProcessing && messages.length === 0 && !error)
+    return null;

   return (
     <div className="fixed bottom-4 left-1/2 -translate-x-1/2 z-50 bg-background/95 backdrop-blur border rounded-lg p-4 min-w-80">
+      {error && (
+        <div className="flex items-center gap-2 text-red-500 mb-2">
+          <AlertCircle className="w-4 h-4" />
+          {error}
+        </div>
+      )}
+
       {isRecording && (
```
322-328: Consider adding testing and validation steps.

The implementation order is logical, but it would be helpful to include testing and validation steps, such as:

- Testing microphone permissions and error handling
- Validating WisprFlow integration
- Testing Claude SDK tool execution
- End-to-end testing with the UI

📋 Suggested additions

```diff
 ## Implementation Order

 1. Add hotkey to `hotkeys.ts`
 2. Create audio recorder
 3. Create voice store
 4. Create tRPC router with tools (main process)
 5. Create WisprFlow client
 6. Create UI overlay
 7. Wire up with `useVoiceCommand` hook
+8. Test microphone permissions and error handling
+9. Validate WisprFlow API integration
+10. Test Claude SDK tool execution
+11. End-to-end testing with UI
```

apps/desktop/src/renderer/routes/_authenticated/providers/CollectionsProvider/collections.ts (1)
214-242: Consider reducing debug logging verbosity for production.

The `onUpdate` handler has extensive debug logging which is helpful during development but may be noisy in production. Consider either:

- Using a debug flag/log level to conditionally log
- Consolidating to fewer, more meaningful log statements

The logging does follow the `[domain/operation]` prefix pattern per guidelines, so this is a minor polish item.

♻️ Example: Consolidated logging

```diff
 onUpdate: async ({ transaction }) => {
   const { original, changes } = transaction.mutations[0];
-  console.log("[agentCommands.onUpdate] Triggered:", {
-    commandId: original.id,
-    changes,
-  });

   // Only send update if we have a status change (required field)
   if (!changes.status) {
-    console.log("[agentCommands.onUpdate] No status change, skipping");
     return { txid: Date.now() };
   }

   try {
-    console.log("[agentCommands.onUpdate] Sending update to server...");
     const result = await apiClient.agent.updateCommand.mutate({
       id: original.id,
       status: changes.status,
       claimedBy: changes.claimedBy ?? undefined,
       claimedAt: changes.claimedAt ?? undefined,
       result: changes.result ?? undefined,
       error: changes.error ?? undefined,
       executedAt: changes.executedAt ?? undefined,
     });
-    console.log("[agentCommands.onUpdate] Server updated, txid:", result.txid);
+    console.log(`[agentCommands.onUpdate] ${original.id} → ${changes.status}`);
     return { txid: Number(result.txid) };
   } catch (error) {
     console.error("[agentCommands.onUpdate] Failed to sync:", error);
     throw error;
   }
 },
```

apps/desktop/src/renderer/routes/_authenticated/settings/devices/components/DevicesSettings/DevicesSettings.tsx (2)
27-37: Add contextual logging for fetch failures.

Errors surface in the UI, but there's no log for troubleshooting/telemetry.

♻️ Suggested tweak

```diff
-    } catch (err) {
-      setError(err instanceof Error ? err.message : "Failed to fetch devices");
+    } catch (err) {
+      console.error("[devices/fetch] Failed to load online devices:", err);
+      setError(err instanceof Error ? err.message : "Failed to fetch devices");
     } finally {
```

As per coding guidelines, please log errors with context.
40-45: Avoid overlapping polls and extract the refresh interval constant.

`fetchDevices` is async, so a slow request can overlap the next interval tick. Consider guarding against overlaps and hoisting the `10_000` value to a named constant.

As per coding guidelines, please extract magic numbers to named constants.
packages/db/src/schema/agent.ts (1)
103-106: Constrain `targetDeviceType` at the schema level.

Right now this column is plain text, so any value can land in the DB. Since routing expects a known set (desktop/mobile/web/cloud), using an enum keeps integrity and typing aligned. Remember to update the SQL migration to add the enum and alter the column type.

♻️ Suggested enum constraint

```diff
+export const targetDeviceTypeValues = [...deviceTypeValues, "cloud"] as const;
+export type TargetDeviceType = (typeof targetDeviceTypeValues)[number];
+export const targetDeviceTypeEnum = pgEnum(
+  "target_device_type",
+  targetDeviceTypeValues,
+);

 // Targeting
 targetDeviceId: text("target_device_id"), // null = cloud execution
-targetDeviceType: text("target_device_type"), // "desktop" | "mobile" | "web" | "cloud"
+targetDeviceType: targetDeviceTypeEnum("target_device_type"),
```

packages/db/drizzle/0013_add_api_keys.sql (1)
1-19: Make `key_hash` unique to prevent duplicate API keys.

A unique index guarantees a single API key maps to one row, avoiding ambiguous validation. Please verify no existing duplicates before applying, and mirror this in the Drizzle schema with `uniqueIndex`.

🔒 Suggested unique index

```diff
-CREATE INDEX "api_keys_key_hash_idx" ON "api_keys" USING btree ("key_hash");
+CREATE UNIQUE INDEX "api_keys_key_hash_idx" ON "api_keys" USING btree ("key_hash");
```

packages/db/drizzle/0012_add_agent_tables.sql (3)
7-8: Consider using the `device_type` enum for the `target_device_type` column. Line 8 defines
`target_device_type` as `text`, but you've already created a `device_type` enum on line 2. However, looking at the ORM schema snippet, `target_device_type` can also be `"cloud"`, which isn't in the enum. If `"cloud"` is intentional, consider either:
- Adding
'cloud'to thedevice_typeenum, or- Documenting why this column uses
textinstead of the enumThis inconsistency could lead to invalid values being stored.
12-12: Consider adding an index onparent_command_idfor chained command queries.The
parent_command_idcolumn enables chained commands but lacks an index. Queries that traverse command chains or look up child commands will perform full table scans.🔧 Suggested index addition
```sql
CREATE INDEX "agent_commands_parent_cmd_idx" ON "agent_commands" USING btree ("parent_command_id");
```
43-43: Unrelated schema change bundled in agent tables migration.

Adding the `name` column to `auth.invitations` appears unrelated to the agent system. Consider separating this into its own migration for clearer history and easier rollbacks.

apps/desktop/src/renderer/hooks/useCommandWatcher/tools/list-workspaces.ts (2)
6-9: Consider switching `execute` to a single params object.

The current `(params, ctx)` signature conflicts with the "2+ params → object params" rule; if feasible, update `ToolDefinition` and call sites to accept `{ params, ctx }`. As per coding guidelines.
16-19: Drop the `as unknown as Record<string, unknown>[]` cast.

`data: { workspaces }` should already satisfy `CommandResult` and preserves type safety.

♻️ Proposed cleanup

```diff
 return {
   success: true,
-  data: { workspaces: workspaces as unknown as Record<string, unknown>[] },
+  data: { workspaces },
 };
```
14-16: Consider adding an undefined guard for type safety.

While `workspaceMatch[1]` will always exist when `workspaceMatch` is truthy due to the regex pattern, TypeScript strict mode may flag this. Consider using optional chaining for explicit safety.

♻️ Suggested improvement

```diff
 // Parse workspace ID from route if present (route is /workspace/$workspaceId)
 const workspaceMatch = pathname.match(/\/workspace\/([^/]+)/);
-const currentWorkspaceId = workspaceMatch ? workspaceMatch[1] : null;
+const currentWorkspaceId = workspaceMatch?.[1] ?? null;
```

apps/desktop/src/renderer/hooks/useCommandWatcher/tools/index.ts (2)
1-21: Avoid `any` in the tool registry typing.

You can keep schema flexibility without `any` by using `z.ZodTypeAny`. As per coding guidelines, avoid `any` where possible.

♻️ Proposed change

```diff
+import { z } from "zod";
 import type { CommandResult, ToolContext, ToolDefinition } from "./types";

 // Registry of all available tools
-// biome-ignore lint/suspicious/noExplicitAny: Tool schemas vary
-const tools: ToolDefinition<any>[] = [
+const tools: ToolDefinition<z.ZodTypeAny>[] = [
   createWorkspace,
   deleteWorkspace,
   getAppContext,
   listProjects,
   listWorkspaces,
   navigateToWorkspace,
   switchWorkspace,
 ];
```
30-34: Prefer a params object for `executeTool` to match repo style.

This aligns with the "object params for 2+ args" guideline and reduces call-site ambiguity. As per coding guidelines, prefer a single params object.

♻️ Proposed change

```diff
-export async function executeTool(
-  name: string,
-  params: Record<string, unknown> | null,
-  ctx: ToolContext,
-): Promise<CommandResult> {
+export async function executeTool({
+  name,
+  params,
+  ctx,
+}: {
+  name: string;
+  params: Record<string, unknown> | null;
+  ctx: ToolContext;
+}): Promise<CommandResult> {
```

Remember to update call sites (e.g., `useCommandWatcher`) accordingly.

apps/desktop/src/renderer/hooks/useCommandWatcher/useCommandWatcher.ts (2)
67-148: Use an object param for `processCommand`.

This keeps call sites self-documenting and aligns with the repo's params-object rule. As per coding guidelines, prefer object parameters for functions with 2+ args.

♻️ Proposed change

```diff
-  const processCommand = useCallback(
-    async (commandId: string, tool: string, params: Record<string, unknown> | null) => {
+  const processCommand = useCallback(
+    async ({
+      commandId,
+      tool,
+      params,
+    }: {
+      commandId: string;
+      tool: string;
+      params: Record<string, unknown> | null;
+    }) => {
       if (processingCommands.has(commandId)) return;
@@
-    for (const cmd of commandsForThisDevice) {
-      processCommand(cmd.id, cmd.tool, cmd.params);
-    }
+    for (const cmd of commandsForThisDevice) {
+      processCommand({ commandId: cmd.id, tool: cmd.tool, params: cmd.params });
+    }
   }, [shouldWatch, deviceInfo?.deviceId, organizationId, pendingCommands, processCommand, collections.agentCommands]);
```
81-90: Extract the 100ms delays into a named constant.

This improves intent and makes tuning easier. As per coding guidelines, avoid magic numbers.

♻️ Proposed change

```diff
 const processingCommands = new Set<string>();
+const COMMAND_STATUS_DELAY_MS = 100;
@@
-      await new Promise((resolve) => setTimeout(resolve, 100));
+      await new Promise((resolve) => setTimeout(resolve, COMMAND_STATUS_DELAY_MS));
@@
-      await new Promise((resolve) => setTimeout(resolve, 100));
+      await new Promise((resolve) => setTimeout(resolve, COMMAND_STATUS_DELAY_MS));
```
61-61: Use a [domain/operation] log prefix.

Line 61 uses `"[mcp]"` without an operation; consider `"[mcp/route]"` or `"[mcp/request]"` for consistency. As per coding guidelines, prefer `[domain/operation]` prefixes.

✏️ Suggested tweak

```diff
-    console.error("[mcp] Error handling request:", error);
+    console.error("[mcp/route] Error handling request:", error);
```
10-13: Prefer a single params object for `execute()`.

`execute(params, ctx)` uses positional args; the TS guideline prefers a single params object for functions with 2+ parameters. Consider updating `ToolDefinition` (and call sites) to `execute({ params, ctx })` for self-documenting calls. As per coding guidelines, use object parameters for 2+ args.
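As a sketch of what the object-param shape could look like (the `ToolContext`, `CommandResult`, and worktree types below are simplified stand-ins for the real ones, and the tool body is a placeholder):

```typescript
// Simplified stand-ins for the real tool types.
interface ToolContext {
  deviceId: string;
}

interface CommandResult {
  success: boolean;
  output?: string;
}

interface ToolDefinition<P> {
  name: string;
  // Single args object instead of positional (params, ctx).
  execute: (args: { params: P; ctx: ToolContext }) => Promise<CommandResult>;
}

interface CreateWorktreeParams {
  branchName: string;
}

// Call sites stay self-documenting: execute({ params, ctx }).
const createWorktree: ToolDefinition<CreateWorktreeParams> = {
  name: "create_worktree",
  execute: async ({ params, ctx }) => ({
    success: true,
    output: `worktree ${params.branchName} on ${ctx.deviceId}`,
  }),
};

async function demo(): Promise<CommandResult> {
  return createWorktree.execute({
    params: { branchName: "feature/mcp" },
    ctx: { deviceId: "dev-1" },
  });
}
```

With this shape, swapping argument order at a call site becomes a compile error rather than a silent bug.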
27-31: Prefer a single params object for tool execution.

This avoids positional mixups and aligns with the repo’s function-parameter convention. As per coding guidelines, prefer a single params object for functions with 2+ parameters.

♻️ Proposed shape update

```diff
-export interface ToolDefinition<T extends z.ZodType> {
-  name: string;
-  schema: T;
-  execute: (params: z.infer<T>, ctx: ToolContext) => Promise<CommandResult>;
-}
+export interface ToolDefinition<T extends z.ZodType> {
+  name: string;
+  schema: T;
+  execute: (args: { params: z.infer<T>; ctx: ToolContext }) => Promise<CommandResult>;
+}
```

packages/trpc/src/lib/agent/cloud-agent.ts (1)
543-640: Prefer a single options object for `executeOnDevice`.

This reduces positional mistakes and matches the repo convention. As per coding guidelines, prefer a single params object for functions with 2+ parameters.

♻️ Proposed signature shift

```diff
-async function executeOnDevice(
-  ctx: AgentContext,
-  deviceId: string,
-  tool: string,
-  params: Record<string, unknown>,
-): Promise<ToolResult> {
+async function executeOnDevice({
+  ctx,
+  deviceId,
+  tool,
+  params,
+}: {
+  ctx: AgentContext;
+  deviceId: string;
+  tool: string;
+  params: Record<string, unknown>;
+}): Promise<ToolResult> {
@@
-      return executeOnDevice(ctx, autoDeviceId, name, toolParams);
+      return executeOnDevice({ ctx, deviceId: autoDeviceId, tool: name, params: toolParams });
```

apps/api/src/lib/mcp/tools.ts (1)
22-23: Prefer a single options object for `registerMcpTools`.

This aligns with the codebase parameter convention and makes future expansion safer. As per coding guidelines, prefer a single params object for functions with 2+ parameters.

♻️ Proposed signature

```diff
-export function registerMcpTools(server: McpServer, ctx: McpContext) {
+export function registerMcpTools({
+  server,
+  ctx,
+}: {
+  server: McpServer;
+  ctx: McpContext;
+}) {
```
| "mcpServers": { | ||
| "superset": { | ||
| "type": "http", | ||
| "url": "http://localhost:3001/api/mcp", | ||
| "headers": { | ||
| "X-API-Key": "${SUPERSET_API_KEY}" | ||
| } |
Align MCP API key env var name with .env.example.
.mcp.json references SUPERSET_API_KEY, but .env.example defines SUPERSET_MCP_API_KEY. This mismatch can lead to a missing X-API-Key header and 401s during local MCP calls.
🔧 Suggested fix
- "X-API-Key": "${SUPERSET_API_KEY}"
+ "X-API-Key": "${SUPERSET_MCP_API_KEY}"📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "mcpServers": { | |
| "superset": { | |
| "type": "http", | |
| "url": "http://localhost:3001/api/mcp", | |
| "headers": { | |
| "X-API-Key": "${SUPERSET_API_KEY}" | |
| } | |
| "mcpServers": { | |
| "superset": { | |
| "type": "http", | |
| "url": "http://localhost:3001/api/mcp", | |
| "headers": { | |
| "X-API-Key": "${SUPERSET_MCP_API_KEY}" | |
| } |
🤖 Prompt for AI Agents
In @.mcp.json around lines 2 - 8, Update the environment variable referenced for
the Superset MCP API key: inside the mcpServers -> superset configuration update
the headers entry for "X-API-Key" to use the SUPERSET_MCP_API_KEY env var
(replace the current SUPERSET_API_KEY reference) so it matches .env.example;
also scan for any other uses of SUPERSET_API_KEY and replace them if they should
point to SUPERSET_MCP_API_KEY to avoid missing header/401s.
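The failure mode is easy to see with a small sketch of the `${VAR}` substitution that `.mcp.json`-style configs perform (the key value is made up; only `SUPERSET_MCP_API_KEY` is "defined", mirroring the mismatch):

```typescript
// Hypothetical env, mirroring the mismatch: only the name from
// .env.example is set; SUPERSET_API_KEY is intentionally absent.
const env: Record<string, string | undefined> = {
  SUPERSET_MCP_API_KEY: "sk_test_123", // hypothetical value
};

// Minimal ${VAR} substitution: unset vars expand to the empty string.
function substitute(template: string): string {
  return template.replace(/\$\{(\w+)\}/g, (_, name) => env[name] ?? "");
}

const badHeader = substitute("${SUPERSET_API_KEY}"); // name used in .mcp.json
const goodHeader = substitute("${SUPERSET_MCP_API_KEY}"); // name in .env.example
// badHeader is "", so the X-API-Key header is effectively missing -> 401
```

An empty substitution fails silently, which is why the mismatch surfaces only as a 401 at request time.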
```typescript
export async function authenticateMcpRequest(
  request: Request,
): Promise<McpContext | null> {
  const apiKey = request.headers.get("X-API-Key");

  if (!apiKey) {
    return null;
  }

  const context = await validateApiKey(apiKey);
  return context;
```
🧩 Analysis chain

🏁 Script executed:

```shell
# First, let's read the actual auth.ts file to see the current implementation
cat -n apps/api/src/lib/mcp/auth.ts
```

Repository: superset-sh/superset
Length of output: 1229

🏁 Script executed:

```shell
# Find where validateApiKey is defined
rg -n "validateApiKey" apps/api/src --type ts -A 5 -B 2
```

Repository: superset-sh/superset
Length of output: 788

🏁 Script executed:

```shell
# Check where authenticateMcpRequest is called from
rg -n "authenticateMcpRequest" apps/api/src --type ts -B 2 -A 2
```

Repository: superset-sh/superset
Length of output: 1123

🏁 Script executed:

```shell
# Check the structure of the mcp directory to understand context
find apps/api/src/lib/mcp -type f -name "*.ts" | head -20
```

Repository: superset-sh/superset
Length of output: 122

🏁 Script executed:

```shell
# Find validateApiKey implementation - it's imported from `@superset/trpc`
rg -n "export.*validateApiKey" --type ts -A 10
```

Repository: superset-sh/superset
Length of output: 1014

🏁 Script executed:

```shell
# Look for packages/trpc to find the validateApiKey implementation
find . -path "*trpc*" -name "*.ts" -type f | grep -v node_modules | head -20
```

Repository: superset-sh/superset
Length of output: 1181

🏁 Script executed:

```shell
# Read the full validateApiKey implementation
sed -n '227,280p' packages/trpc/src/router/agent/api-keys.ts | cat -n
```

Repository: superset-sh/superset
Length of output: 1457

🏁 Script executed:

```shell
# Check if there's any error throwing in validateApiKey and surrounding context
sed -n '225,260p' packages/trpc/src/router/agent/api-keys.ts | cat -n
```

Repository: superset-sh/superset
Length of output: 1022

🏁 Script executed:

```shell
# Check the route handler to understand full context
cat -n apps/api/src/app/api/mcp/route.ts
```

Repository: superset-sh/superset
Length of output: 4353

🏁 Script executed:

```shell
# Verify the exact handling in the route
sed -n '38,60p' apps/api/src/app/api/mcp/route.ts | cat -n
```

Repository: superset-sh/superset
Length of output: 887
Wrap API key validation to prevent unhandled exceptions from bubbling up with no logging.

`validateApiKey` performs database queries that can fail. If an exception occurs (connection error, query failure, etc.), it will propagate uncaught past the outer try/catch in the route handler (which starts after the call at line 40) and result in a 500 error with no context. Add error handling with logging:
Suggested fix

```diff
 export async function authenticateMcpRequest(
   request: Request,
 ): Promise<McpContext | null> {
   const apiKey = request.headers.get("X-API-Key");
   if (!apiKey) {
     return null;
   }
-  const context = await validateApiKey(apiKey);
-  return context;
+  try {
+    return await validateApiKey(apiKey);
+  } catch (error) {
+    console.error("[mcp/auth] API key validation failed:", error);
+    return null;
+  }
 }
```

🤖 Prompt for AI Agents
In `@apps/api/src/lib/mcp/auth.ts` around lines 12 - 22, authenticateMcpRequest
currently calls validateApiKey directly which can throw DB errors; wrap the
validateApiKey call in a try/catch inside authenticateMcpRequest, log the caught
error (including contextual info such as that the API key validation failed and
a masked/partial apiKey value) and return null on failure so exceptions do not
bubble out; reference the authenticateMcpRequest function and the validateApiKey
call when making the change.
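For the masked-key logging the prompt mentions, a helper along these lines could work (`maskApiKey` is hypothetical, not an existing function in the repo):

```typescript
// Hypothetical helper: keep a short prefix so log lines can be correlated
// to a specific key without leaking the secret itself.
function maskApiKey(apiKey: string): string {
  const visible = 4;
  if (apiKey.length <= visible) return "****";
  return apiKey.slice(0, visible) + "*".repeat(apiKey.length - visible);
}
```

Logging `maskApiKey(apiKey)` alongside the error then gives operators something to grep without ever writing the full credential to logs.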
```typescript
server.tool(
  "update_task",
  "Update an existing task",
  {
    taskId: z.string().describe("Task ID (uuid) or slug"),
    title: z.string().min(1).optional().describe("New title"),
    description: z.string().optional().describe("New description"),
    priority: z.enum(["urgent", "high", "medium", "low", "none"]).optional(),
    assigneeId: z
      .string()
      .uuid()
      .nullable()
      .optional()
      .describe("New assignee (null to unassign)"),
    statusId: z.string().uuid().optional().describe("New status ID"),
    labels: z.array(z.string()).optional().describe("Replace labels"),
    dueDate: z
      .string()
      .datetime()
      .nullable()
      .optional()
      .describe("New due date (null to clear)"),
    estimate: z.number().int().positive().nullable().optional(),
  },
  async (params) => {
    // Find task by ID or slug
    const isUuid =
      /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i.test(
        params.taskId,
      );
    const [existingTask] = await db
      .select()
      .from(tasks)
      .where(
        and(
          isUuid
            ? eq(tasks.id, params.taskId)
            : eq(tasks.slug, params.taskId),
          eq(tasks.organizationId, ctx.organizationId),
          isNull(tasks.deletedAt),
        ),
      )
      .limit(1);

    if (!existingTask) {
      return {
        content: [{ type: "text", text: "Error: Task not found" }],
        isError: true,
      };
    }

    const updateData: Record<string, unknown> = {};
    if (params.title !== undefined) updateData.title = params.title;
    if (params.description !== undefined)
      updateData.description = params.description;
    if (params.priority !== undefined) updateData.priority = params.priority;
    if (params.assigneeId !== undefined)
      updateData.assigneeId = params.assigneeId;
    if (params.statusId !== undefined) updateData.statusId = params.statusId;
    if (params.labels !== undefined) updateData.labels = params.labels;
    if (params.dueDate !== undefined)
      updateData.dueDate = params.dueDate ? new Date(params.dueDate) : null;
    if (params.estimate !== undefined) updateData.estimate = params.estimate;

    const result = await dbWs.transaction(async (tx) => {
      const [task] = await tx
        .update(tasks)
        .set(updateData)
        .where(eq(tasks.id, existingTask.id))
        .returning();

      const txid = await getCurrentTxid(tx);
      return { task, txid };
    });

    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(
            {
              id: result.task?.id,
              slug: result.task?.slug,
              title: result.task?.title,
              txid: result.txid,
            },
            null,
            2,
          ),
        },
      ],
    };
  },
);
```
🧩 Analysis chain

🏁 Script executed:

```shell
# Find similar update patterns in the codebase
fd -e ts -e tsx | xargs rg "\.update\(" -A 5 | head -100
```

Repository: superset-sh/superset
Length of output: 10097

🏁 Script executed:

```shell
# Search for any validation of empty objects before database operations
rg "Object\.keys.*length|updateData|Object\.keys" apps/api/src/lib/mcp/ -A 2 -B 2
```

Repository: superset-sh/superset
Length of output: 1599

🏁 Script executed:

```shell
# Check if there are similar guard patterns elsewhere in the codebase
rg "No fields provided|no.*update|empty.*update" apps/api/src -i
```

Repository: superset-sh/superset
Length of output: 309

🏁 Script executed:

```shell
# Check the Drizzle ORM package to understand its behavior with empty sets
cat package.json | grep -i drizzle
```

Repository: superset-sh/superset
Length of output: 46

🏁 Script executed:

```shell
# Read the tools.ts file to understand the full context
head -300 apps/api/src/lib/mcp/tools.ts | tail -150
```

Repository: superset-sh/superset
Length of output: 4155

🏁 Script executed:

```shell
# Search for Drizzle documentation or any error handling related to empty updates
rg "drizzle" -i apps/api/package.json package.json
```

Repository: superset-sh/superset
Length of output: 112

🏁 Script executed:

```shell
# Check if there are any tests or documentation about this in the repo
fd -e md -e test.ts -e spec.ts | xargs rg "drizzle.*update|empty.*set" -i 2>/dev/null | head -20
```

Repository: superset-sh/superset
Length of output: 46

🏁 Script executed:

```shell
# Look for any existing validation before Drizzle updates in the entire apps/api
rg "Object\.keys.*\.length|if.*Object\.keys" apps/api/src --type ts --type tsx
```

Repository: superset-sh/superset
Length of output: 91

🏁 Script executed:

```shell
# Search for Drizzle update patterns with empty sets or validation
rg "\.set\(\)" apps/api/src -A 2 -B 2
```

Repository: superset-sh/superset
Length of output: 46

🏁 Script executed:

```shell
# Check if there are any error logs or issues related to empty updates
rg "empty.*update|no.*field|update.*fail" apps/api/src -i --type ts
```

Repository: superset-sh/superset
Length of output: 220

🏁 Script executed:

```shell
# Let me check the actual Drizzle version and search for known behavior
python3 << 'EOF'
import subprocess
import json

# Get drizzle-orm version from package.json
result = subprocess.run(['cat', 'apps/api/package.json'], capture_output=True, text=True)
content = json.loads(result.stdout)
drizzle_version = content.get('dependencies', {}).get('drizzle-orm', 'unknown')
print(f"Drizzle ORM version: {drizzle_version}")

# Based on Drizzle 0.45.1 behavior (released around late 2024/early 2025)
# Drizzle should handle empty .set() by throwing an error or creating invalid SQL
# Let's search for any error handling around updates
EOF
```

Repository: superset-sh/superset
Length of output: 150

🏁 Script executed:

```shell
# Search for transaction patterns in the codebase to see if there's error handling
rg "dbWs\.transaction|\.transaction\(" apps/api/src -A 10 | head -50
```

Repository: superset-sh/superset
Length of output: 1931

🏁 Script executed:

```shell
# Examine the schema definition more carefully - all params are optional
cat apps/api/src/lib/mcp/tools.ts | sed -n '162,220p'
```

Repository: superset-sh/superset
Length of output: 1896

🏁 Script executed:

```shell
# Check Drizzle ORM 0.45.1 documentation behavior or look for any error handling in the current code
rg "set\(\s*\)" apps/api/src -B 3 -A 3
```

Repository: superset-sh/superset
Length of output: 46

🏁 Script executed:

```shell
# Look for any try-catch or error handling around the update operations
rg "catch|throw|error" apps/api/src/lib/mcp/tools.ts -B 2 -A 2 | head -40
```

Repository: superset-sh/superset
Length of output: 164
Add validation to reject empty updates.

All update fields are optional, so a request carrying only `taskId` reaches `.set({})`, which Drizzle rejects with a database error. Return a validation error when no fields are provided.
Suggested guard

```diff
     const updateData: Record<string, unknown> = {};
     if (params.title !== undefined) updateData.title = params.title;
     if (params.description !== undefined)
       updateData.description = params.description;
     if (params.priority !== undefined) updateData.priority = params.priority;
     if (params.assigneeId !== undefined)
       updateData.assigneeId = params.assigneeId;
     if (params.statusId !== undefined) updateData.statusId = params.statusId;
     if (params.labels !== undefined) updateData.labels = params.labels;
     if (params.dueDate !== undefined)
       updateData.dueDate = params.dueDate ? new Date(params.dueDate) : null;
     if (params.estimate !== undefined) updateData.estimate = params.estimate;
+
+    if (Object.keys(updateData).length === 0) {
+      return {
+        content: [{ type: "text", text: "Error: No fields provided to update" }],
+        isError: true,
+      };
+    }
```

🤖 Prompt for AI Agents
In `@apps/api/src/lib/mcp/tools.ts` around lines 162 - 255, The update_task
handler allows all fields to be optional so updateData can be empty and Drizzle
will throw when calling tx.update(...).set({}); before beginning the transaction
in the update_task server.tool, check the constructed updateData (the object
populated from
params.title/description/priority/assigneeId/statusId/labels/dueDate/estimate)
and if it has no keys return a validation error (similar shape to the existing
"Task not found" error) rejecting empty updates instead of calling
dbWs.transaction; this prevents calling tx.update(...).set({}) and surfaces a
clear client error.
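The same pattern can be factored into a small helper; this `buildUpdatePatch` sketch is illustrative, not code from the PR:

```typescript
// Illustrative helper: copy only keys the caller explicitly supplied.
// `undefined` means "leave unchanged"; an explicit null still passes
// through (matching the tool's "null to unassign / null to clear" fields).
function buildUpdatePatch<T extends Record<string, unknown>>(
  params: T,
  keys: Array<keyof T>,
): Partial<T> {
  const patch: Partial<T> = {};
  for (const key of keys) {
    if (params[key] !== undefined) {
      patch[key] = params[key];
    }
  }
  return patch;
}

const patch = buildUpdatePatch(
  { title: "New title", assigneeId: null, statusId: undefined },
  ["title", "assigneeId", "statusId"],
);
// title and the explicit null survive; the undefined statusId is dropped,
// so an all-undefined request yields an empty patch the handler can reject.
const hasFields = Object.keys(patch).length > 0;
```

A handler can then check `Object.keys(patch).length === 0` once, instead of relying on Drizzle to fail on an empty `.set({})`.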
```typescript
server.tool(
  "create_workspace",
  "Create a new git worktree workspace",
  {
    deviceId: z.string().optional(),
    name: z
      .string()
      .optional()
      .describe("Workspace name (auto-generated if not provided)"),
    branchName: z
      .string()
      .optional()
      .describe("Branch name (auto-generated if not provided)"),
    baseBranch: z
      .string()
      .optional()
      .describe("Branch to create from (defaults to main)"),
    taskId: z
      .string()
      .optional()
      .describe("Task ID to associate with workspace"),
  },
  async (params) => {
    const targetDeviceId = params.deviceId ?? ctx.defaultDeviceId;

    if (!targetDeviceId) {
      return {
        content: [
          {
            type: "text",
            text: "Error: No device specified and no default device configured",
          },
        ],
        isError: true,
      };
    }

    return executeOnDevice({
      ctx,
      deviceId: targetDeviceId,
      tool: "create_workspace",
      params: {
        name: params.name,
        branchName: params.branchName,
        baseBranch: params.baseBranch,
      },
    });
  },
);
```
Forward `taskId` to the device command.

`taskId` is accepted by the tool schema but dropped before dispatch, so the desktop side can’t associate the workspace with a task.
🐛 Suggested fix
```diff
     return executeOnDevice({
       ctx,
       deviceId: targetDeviceId,
       tool: "create_workspace",
       params: {
         name: params.name,
         branchName: params.branchName,
         baseBranch: params.baseBranch,
+        taskId: params.taskId,
       },
     });
```

📝 Committable suggestion
```typescript
server.tool(
  "create_workspace",
  "Create a new git worktree workspace",
  {
    deviceId: z.string().optional(),
    name: z
      .string()
      .optional()
      .describe("Workspace name (auto-generated if not provided)"),
    branchName: z
      .string()
      .optional()
      .describe("Branch name (auto-generated if not provided)"),
    baseBranch: z
      .string()
      .optional()
      .describe("Branch to create from (defaults to main)"),
    taskId: z
      .string()
      .optional()
      .describe("Task ID to associate with workspace"),
  },
  async (params) => {
    const targetDeviceId = params.deviceId ?? ctx.defaultDeviceId;
    if (!targetDeviceId) {
      return {
        content: [
          {
            type: "text",
            text: "Error: No device specified and no default device configured",
          },
        ],
        isError: true,
      };
    }
    return executeOnDevice({
      ctx,
      deviceId: targetDeviceId,
      tool: "create_workspace",
      params: {
        name: params.name,
        branchName: params.branchName,
        baseBranch: params.baseBranch,
        taskId: params.taskId,
      },
    });
  },
);
```
🤖 Prompt for AI Agents
In `@apps/api/src/lib/mcp/tools.ts` around lines 807 - 855, The create_workspace
tool currently accepts taskId but doesn't forward it to the device: update the
params passed to executeOnDevice in the server.tool handler (create_workspace)
to include params.taskId (or undefined when absent) so the device command
receives taskId; specifically, add taskId: params.taskId to the object passed as
params in the executeOnDevice call so the desktop can associate the workspace
with the task.
```typescript
export class AudioRecorder {
  private mediaRecorder: MediaRecorder | null = null;
  private chunks: Blob[] = [];

  async start(): Promise<void> {
    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    this.mediaRecorder = new MediaRecorder(stream);
    this.chunks = [];
    this.mediaRecorder.ondataavailable = (e) => this.chunks.push(e.data);
    this.mediaRecorder.start();
  }

  async stop(): Promise<string> {
    return new Promise((resolve) => {
      this.mediaRecorder!.onstop = async () => {
        const blob = new Blob(this.chunks, { type: "audio/webm" });
        const wav = await convertToWav(blob);
        const base64 = await blobToBase64(wav);
        resolve(base64);
      };
      this.mediaRecorder!.stop();
    });
  }
}

// Convert to 16kHz WAV for WisprFlow
async function convertToWav(blob: Blob): Promise<Blob> { /* ... */ }
async function blobToBase64(blob: Blob): Promise<string> { /* ... */ }
```
Critical: Add resource cleanup and error handling.
The `AudioRecorder` implementation has several critical issues that must be addressed:

- Resource leak: MediaStream tracks are never stopped. After `getUserMedia`, the stream tracks must be stopped when recording ends, otherwise the microphone remains active.
- Missing error handling: `getUserMedia` can reject if permissions are denied or hardware is unavailable. This will cause unhandled promise rejections.
- No cancellation: If a user starts recording but the app closes or they navigate away, there's no cleanup mechanism.
🔧 Proposed fixes
```diff
 export class AudioRecorder {
   private mediaRecorder: MediaRecorder | null = null;
+  private stream: MediaStream | null = null;
   private chunks: Blob[] = [];

   async start(): Promise<void> {
-    const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
+    try {
+      this.stream = await navigator.mediaDevices.getUserMedia({ audio: true });
+    } catch (error) {
+      throw new Error(`Failed to access microphone: ${error.message}`);
+    }
+
+    if (!MediaRecorder.isTypeSupported('audio/webm')) {
+      throw new Error('Browser does not support audio recording');
+    }
+
-    this.mediaRecorder = new MediaRecorder(stream);
+    this.mediaRecorder = new MediaRecorder(this.stream);
     this.chunks = [];
     this.mediaRecorder.ondataavailable = (e) => this.chunks.push(e.data);
     this.mediaRecorder.start();
   }

   async stop(): Promise<string> {
     return new Promise((resolve, reject) => {
       this.mediaRecorder!.onstop = async () => {
-        const blob = new Blob(this.chunks, { type: "audio/webm" });
-        const wav = await convertToWav(blob);
-        const base64 = await blobToBase64(wav);
-        resolve(base64);
+        try {
+          const blob = new Blob(this.chunks, { type: "audio/webm" });
+          const wav = await convertToWav(blob);
+          const base64 = await blobToBase64(wav);
+          this.cleanup();
+          resolve(base64);
+        } catch (error) {
+          this.cleanup();
+          reject(error);
+        }
       };
       this.mediaRecorder!.stop();
     });
   }
+
+  private cleanup(): void {
+    if (this.stream) {
+      this.stream.getTracks().forEach(track => track.stop());
+      this.stream = null;
+    }
+    this.mediaRecorder = null;
+    this.chunks = [];
+  }
+
+  // Call this on component unmount
+  dispose(): void {
+    if (this.mediaRecorder?.state === 'recording') {
+      this.mediaRecorder.stop();
+    }
+    this.cleanup();
+  }
 }
```

📝 Committable suggestion
```typescript
export class AudioRecorder {
  private mediaRecorder: MediaRecorder | null = null;
  private stream: MediaStream | null = null;
  private chunks: Blob[] = [];

  async start(): Promise<void> {
    try {
      this.stream = await navigator.mediaDevices.getUserMedia({ audio: true });
    } catch (error) {
      throw new Error(`Failed to access microphone: ${error.message}`);
    }

    if (!MediaRecorder.isTypeSupported('audio/webm')) {
      throw new Error('Browser does not support audio recording');
    }

    this.mediaRecorder = new MediaRecorder(this.stream);
    this.chunks = [];
    this.mediaRecorder.ondataavailable = (e) => this.chunks.push(e.data);
    this.mediaRecorder.start();
  }

  async stop(): Promise<string> {
    return new Promise((resolve, reject) => {
      this.mediaRecorder!.onstop = async () => {
        try {
          const blob = new Blob(this.chunks, { type: "audio/webm" });
          const wav = await convertToWav(blob);
          const base64 = await blobToBase64(wav);
          this.cleanup();
          resolve(base64);
        } catch (error) {
          this.cleanup();
          reject(error);
        }
      };
      this.mediaRecorder!.stop();
    });
  }

  private cleanup(): void {
    if (this.stream) {
      this.stream.getTracks().forEach(track => track.stop());
      this.stream = null;
    }
    this.mediaRecorder = null;
    this.chunks = [];
  }

  // Call this on component unmount
  dispose(): void {
    if (this.mediaRecorder?.state === 'recording') {
      this.mediaRecorder.stop();
    }
    this.cleanup();
  }
}

// Convert to 16kHz WAV for WisprFlow
async function convertToWav(blob: Blob): Promise<Blob> { /* ... */ }
async function blobToBase64(blob: Blob): Promise<string> { /* ... */ }
```
🤖 Prompt for AI Agents
In `@apps/desktop/VOICE_AGENT_PLAN.md` around lines 73 - 101, The AudioRecorder
lacks error handling, track cleanup and a cancellation path; update the class
(AudioRecorder) to store the MediaStream on the instance (e.g., this.stream),
wrap navigator.mediaDevices.getUserMedia in try/catch inside start() and
propagate or log errors, guard against null mediaRecorder in stop(), add an
explicit cancel()/dispose() method that stops and releases all MediaStreamTracks
and aborts an in-progress MediaRecorder, and ensure onstop/onerror handlers
always stop tracks and clean this.mediaRecorder/this.chunks; keep convertToWav
and blobToBase64 usage but ensure any exceptions from them are caught in stop()
and cause the promise to reject or resolve with a failure indicator.
```typescript
export function useVoiceCommand() {
  const recorder = useRef(new AudioRecorder());
  const store = useVoiceStore();

  const startRecording = async () => {
    store.startRecording();
    await recorder.current.start();
  };

  const stopRecording = async () => {
    store.stopRecording();
    store.setProcessing(true);

    const audioBase64 = await recorder.current.stop();

    // Subscribe to agent stream
    electronTrpc.voice.runAgent.subscribe(
      { audioBase64 },
      {
        onData: (msg) => store.addMessage(msg),
        onError: (err) => store.setError(err.message),
        onComplete: () => store.setProcessing(false),
      }
    );
  };

  // Bind to hotkey
  useAppHotkey("VOICE_PUSH_TO_TALK", {
    onKeyDown: startRecording,
    onKeyUp: stopRecording,
  });
}
```
Critical: Add subscription cleanup and handle component unmount.
The `useVoiceCommand` hook has critical resource management issues:

- Subscription leak: The tRPC subscription is created but never unsubscribed. Each recording creates a new subscription that persists forever.
- No unmount cleanup: The `AudioRecorder` instance is never disposed, leaving media streams active.
- Missing import: `electronTrpc` is referenced but not imported.
- No error handling: Async operations can fail silently.
🔧 Proposed fixes
```diff
+import { useEffect, useRef } from 'react';
+import { electronTrpc } from '@/renderer/lib/trpc';
+
 export function useVoiceCommand() {
   const recorder = useRef(new AudioRecorder());
   const store = useVoiceStore();
+  const subscriptionRef = useRef<{ unsubscribe: () => void } | null>(null);

   const startRecording = async () => {
-    store.startRecording();
-    await recorder.current.start();
+    try {
+      store.startRecording();
+      await recorder.current.start();
+    } catch (error) {
+      store.setError(error.message);
+    }
   };

   const stopRecording = async () => {
-    store.stopRecording();
-    store.setProcessing(true);
+    try {
+      store.stopRecording();
+      store.setProcessing(true);

-    const audioBase64 = await recorder.current.stop();
+      const audioBase64 = await recorder.current.stop();

-    // Subscribe to agent stream
-    electronTrpc.voice.runAgent.subscribe(
-      { audioBase64 },
-      {
-        onData: (msg) => store.addMessage(msg),
-        onError: (err) => store.setError(err.message),
-        onComplete: () => store.setProcessing(false),
-      }
-    );
+      // Clean up previous subscription
+      if (subscriptionRef.current) {
+        subscriptionRef.current.unsubscribe();
+      }
+
+      // Subscribe to agent stream
+      subscriptionRef.current = electronTrpc.voice.runAgent.subscribe(
+        { audioBase64 },
+        {
+          onData: (msg) => store.addMessage(msg),
+          onError: (err) => {
+            store.setError(err.message);
+            subscriptionRef.current = null;
+          },
+          onComplete: () => {
+            store.setProcessing(false);
+            subscriptionRef.current = null;
+          },
+        }
+      );
+    } catch (error) {
+      store.setError(error.message);
+      store.setProcessing(false);
+    }
   };

+  // Cleanup on unmount
+  useEffect(() => {
+    return () => {
+      if (subscriptionRef.current) {
+        subscriptionRef.current.unsubscribe();
+      }
+      recorder.current.dispose();
+    };
+  }, []);
+
   // Bind to hotkey
   useAppHotkey("VOICE_PUSH_TO_TALK", {
     onKeyDown: startRecording,
     onKeyUp: stopRecording,
   });
 }
```

📝 Committable suggestion
```tsx
import { useEffect, useRef } from 'react';
import { electronTrpc } from '@/renderer/lib/trpc';

export function useVoiceCommand() {
  const recorder = useRef(new AudioRecorder());
  const store = useVoiceStore();
  const subscriptionRef = useRef<{ unsubscribe: () => void } | null>(null);

  const startRecording = async () => {
    try {
      store.startRecording();
      await recorder.current.start();
    } catch (error) {
      store.setError(error.message);
    }
  };

  const stopRecording = async () => {
    try {
      store.stopRecording();
      store.setProcessing(true);
      const audioBase64 = await recorder.current.stop();

      // Clean up previous subscription
      if (subscriptionRef.current) {
        subscriptionRef.current.unsubscribe();
      }

      // Subscribe to agent stream
      subscriptionRef.current = electronTrpc.voice.runAgent.subscribe(
        { audioBase64 },
        {
          onData: (msg) => store.addMessage(msg),
          onError: (err) => {
            store.setError(err.message);
            subscriptionRef.current = null;
          },
          onComplete: () => {
            store.setProcessing(false);
            subscriptionRef.current = null;
          },
        }
      );
    } catch (error) {
      store.setError(error.message);
      store.setProcessing(false);
    }
  };

  // Cleanup on unmount
  useEffect(() => {
    return () => {
      if (subscriptionRef.current) {
        subscriptionRef.current.unsubscribe();
      }
      recorder.current.dispose();
    };
  }, []);

  // Bind to hotkey
  useAppHotkey("VOICE_PUSH_TO_TALK", {
    onKeyDown: startRecording,
    onKeyUp: stopRecording,
  });
}
```
🤖 Prompt for AI Agents
In `@apps/desktop/VOICE_AGENT_PLAN.md` around lines 265 - 297, The hook
useVoiceCommand leaks resources and lacks imports/error handling; fix by
importing electronTrpc, wrap async operations in startRecording and
stopRecording with try/catch/finally to call store.setError and ensure
store.setProcessing is toggled correctly, store the result of
electronTrpc.voice.runAgent.subscribe in a ref (e.g., agentSubRef) and call
agentSubRef.current?.unsubscribe() when stopping and in a useEffect cleanup, and
dispose the AudioRecorder (recorder.current.dispose() or equivalent) in the same
cleanup to stop media tracks; update startRecording/stopRecording to guard
against concurrent calls (no-op if already recording) and ensure
onError/onComplete still update store state.
```ts
export function canRunTool(type: DeviceType, tool: string): boolean {
  const toolRequirements: Record<string, string[]> = {
    createWorktree: ["worktree"],
    switchWorkspace: ["filesystem"],
    listWorktrees: ["filesystem"],
    getCurrentContext: ["filesystem"],
    runTerminalCommand: ["terminal"],
    searchCode: ["filesystem"],
  };
  const required = toolRequirements[tool] ?? [];
  const capabilities = getDeviceCapabilities(type);
  return required.every((cap) => capabilities.includes(cap));
}
```
Default-deny unknown tools to avoid unsupported execution.
`toolRequirements[tool] ?? []` allows any unknown tool on all devices, which can misroute new tools to unsupported clients. Recommend default-deny and move requirements to a top-level constant. As per coding guidelines, prefer object params for functions with 2+ parameters.
✅ Safer default + object params
+const TOOL_REQUIREMENTS: Record<string, readonly string[]> = {
+ createWorktree: ["worktree"],
+ switchWorkspace: ["filesystem"],
+ listWorktrees: ["filesystem"],
+ getCurrentContext: ["filesystem"],
+ runTerminalCommand: ["terminal"],
+ searchCode: ["filesystem"],
+};
-export function canRunTool(type: DeviceType, tool: string): boolean {
- const toolRequirements: Record<string, string[]> = {
- createWorktree: ["worktree"],
- switchWorkspace: ["filesystem"],
- listWorktrees: ["filesystem"],
- getCurrentContext: ["filesystem"],
- runTerminalCommand: ["terminal"],
- searchCode: ["filesystem"],
- };
- const required = toolRequirements[tool] ?? [];
- const capabilities = getDeviceCapabilities(type);
- return required.every((cap) => capabilities.includes(cap));
-}
+export function canRunTool({
+ type,
+ tool,
+}: {
+ type: DeviceType;
+ tool: string;
+}): boolean {
+ const required = TOOL_REQUIREMENTS[tool];
+ if (!required) return false;
+ const capabilities = getDeviceCapabilities(type);
+ return required.every((cap) => capabilities.includes(cap));
+}

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const TOOL_REQUIREMENTS: Record<string, readonly string[]> = {
  createWorktree: ["worktree"],
  switchWorkspace: ["filesystem"],
  listWorktrees: ["filesystem"],
  getCurrentContext: ["filesystem"],
  runTerminalCommand: ["terminal"],
  searchCode: ["filesystem"],
};

export function canRunTool({
  type,
  tool,
}: {
  type: DeviceType;
  tool: string;
}): boolean {
  const required = TOOL_REQUIREMENTS[tool];
  if (!required) return false;
  const capabilities = getDeviceCapabilities(type);
  return required.every((cap) => capabilities.includes(cap));
}
```
🤖 Prompt for AI Agents
In `@packages/db/src/schema/agent.ts` around lines 198 - 209, canRunTool currently
defines toolRequirements inline and uses toolRequirements[tool] ?? [] which
permits unknown tools; change to a top-level constant (e.g., TOOL_REQUIREMENTS:
Record<string,string[]>) and update canRunTool to accept an object param (e.g.,
canRunTool({ type, tool })) per the style guide; in the function, look up
requirements with TOOL_REQUIREMENTS[tool] and if undefined return false
(default-deny) otherwise call getDeviceCapabilities(type) and verify every
required capability is present. Ensure you update all call sites to the new
object-parameter signature.
```ts
updateCommand: protectedProcedure
  .input(
    z.object({
      id: z.string().uuid(),
      status: z.enum(commandStatusValues),
      claimedBy: z.string().optional(),
      claimedAt: z.date().optional(),
      result: z.record(z.string(), z.unknown()).optional(),
      error: z.string().optional(),
      executedAt: z.date().optional(),
    }),
  )
  .mutation(async ({ ctx, input }) => {
    console.log("[agent.updateCommand] Received:", {
      id: input.id,
      status: input.status,
      hasResult: !!input.result,
      hasError: !!input.error,
    });

    const organizationId = ctx.session.session.activeOrganizationId;
    if (!organizationId) {
      throw new TRPCError({
        code: "BAD_REQUEST",
        message: "No active organization selected",
      });
    }

    // Verify the command exists and belongs to this org
    const [existingCommand] = await db
      .select()
      .from(agentCommands)
      .where(
        and(
          eq(agentCommands.id, input.id),
          eq(agentCommands.organizationId, organizationId),
        ),
      );

    if (!existingCommand) {
      console.error("[agent.updateCommand] Command not found:", input.id);
      throw new TRPCError({
        code: "NOT_FOUND",
        message: "Command not found",
      });
    }

    console.log("[agent.updateCommand] Existing status:", existingCommand.status);

    // Build update object
    const updateData: Partial<typeof agentCommands.$inferInsert> = {
      status: input.status,
    };

    if (input.claimedBy !== undefined) {
      updateData.claimedBy = input.claimedBy;
    }
    if (input.claimedAt !== undefined) {
      updateData.claimedAt = input.claimedAt;
    }
    if (input.result !== undefined) {
      updateData.result = input.result;
    }
    if (input.error !== undefined) {
      updateData.error = input.error;
    }
    if (input.executedAt !== undefined) {
      updateData.executedAt = input.executedAt;
    }

    const [updated] = await db
      .update(agentCommands)
      .set(updateData)
      .where(eq(agentCommands.id, input.id))
      .returning();

    console.log("[agent.updateCommand] Updated to status:", updated?.status);

    return { command: updated, txid: BigInt(Date.now()) };
  }),
```
Add an authorization guard before updating commands.
Right now any authenticated org member can update any command in that org. Lock this down to the command creator (or the owning device/user) to prevent cross-user tampering.
🔒 Suggested guard
if (!existingCommand) {
console.error("[agent.updateCommand] Command not found:", input.id);
throw new TRPCError({
code: "NOT_FOUND",
message: "Command not found",
});
}
+
+ if (existingCommand.userId !== ctx.session.user.id) {
+ throw new TRPCError({
+ code: "FORBIDDEN",
+ message: "Not authorized to update this command",
+ });
+  }

🤖 Prompt for AI Agents
In `@packages/trpc/src/router/agent/agent.ts` around lines 163 - 242, Add an
authorization guard in updateCommand that ensures only the command's
owner/creator or the owning device can update it: after fetching existingCommand
(from agentCommands) grab the current principal (e.g. ctx.session.session.userId
and/or ctx.session.session.deviceId), then compare against the command ownership
fields on existingCommand (e.g. existingCommand.createdBy,
existingCommand.ownerId, or existingCommand.claimedBy depending on your schema);
if none match, throw a TRPCError with code "FORBIDDEN". Place this check before
building updateData and the db.update call in the updateCommand mutation (use
the existingCommand and ctx.session values to locate the logic). Ensure tests or
callers that expect updates use an authorized principal.
Actionable comments posted: 16
🤖 Fix all issues with AI agents
In `@apps/api/src/lib/mcp/tools.ts`:
- Around line 92-118: The slug generation can produce an empty string and the
uniqueness check only queries organization-scoped slugs while the DB uniqueness
is global; update the logic around slug/uniqueSlug (the slug variable and the
uniqueness loop that uses existingTasks, existingSlugs, db and tasks) to: 1)
provide a deterministic fallback when slug === "" (e.g., "untitled" + short
random/ts token) before any DB checks, and 2) ensure uniqueness against global
slugs by querying without the org filter (or by catching a global
unique-constraint DB error on insert) when building existingTasks/existingSlugs
so the while loop de-duplicates globally rather than only per-organization.
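The slug fix above can be sketched as two pure helpers. This is a minimal illustration of the prompt's two requirements (non-empty fallback, global de-duplication), not the repo's actual implementation; the names `slugify` and `uniqueSlug` and the `untitled-` fallback format are assumptions.

```typescript
const SLUG_MAX_LENGTH = 50;

// Same transform as the reviewed code, plus a deterministic fallback so an
// all-symbol title (e.g. "!!!") never produces an empty slug.
export function slugify(title: string): string {
  const slug = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "")
    .slice(0, SLUG_MAX_LENGTH);
  return slug === "" ? `untitled-${Date.now().toString(36)}` : slug;
}

// De-duplicate against a set of existing slugs. To satisfy a global unique
// constraint, the caller must populate `existing` from an org-unscoped query.
export function uniqueSlug(base: string, existing: Set<string>): string {
  if (!existing.has(base)) return base;
  let i = 2;
  while (existing.has(`${base}-${i}`)) i++;
  return `${base}-${i}`;
}
```

An alternative to pre-querying all slugs is to attempt the insert and retry with a new suffix when the database raises the unique-constraint error.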
- Around line 17-23: The function registerMcpTools currently takes two
positional parameters (McpServer, McpContext); change its signature to accept a
single params object (e.g. { server: McpServer, ctx: McpContext }) and update
the function body to destructure server and ctx where used. Also update the one
call site in apps/api/src/app/api/mcp/route.ts to pass an object with those
named properties instead of two positional args, and update type annotations
accordingly so TypeScript infers the new shape.
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/create-worktree.ts`:
- Around line 32-46: After the fallback logic that computes projectId from
workspaces, add a guard that verifies projectId is defined before calling
ctx.createWorktree.mutateAsync; if projectId is still null/undefined, throw or
return a clear error (e.g., an Error with a descriptive message) so the mutation
is not invoked with an invalid projectId. Update the create-worktree flow around
the symbols projectId, workspaces, and ctx.createWorktree.mutateAsync to perform
this early check and fail fast with a readable error.
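The fail-fast guard described above might look like the following sketch. The function name `resolveProjectId` and the workspace shape are assumptions for illustration; the actual hook's types will differ.

```typescript
interface WorkspaceLike {
  id: string;
  projectId?: string | null;
}

// Resolve a projectId from an explicit argument or the first workspace,
// throwing a readable error instead of passing undefined to the mutation.
export function resolveProjectId({
  explicitProjectId,
  workspaces,
}: {
  explicitProjectId?: string;
  workspaces: WorkspaceLike[];
}): string {
  const projectId = explicitProjectId ?? workspaces[0]?.projectId ?? undefined;
  if (!projectId) {
    throw new Error(
      "createWorktree: unable to resolve a projectId (no explicit id and no workspace provides one)",
    );
  }
  return projectId;
}
```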
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/get-app-context.ts`:
- Around line 11-16: The code currently extracts currentWorkspaceId from
pathname which can include a query string; update the logic that computes
pathname (after removing the leading '#') to strip any query string or search
portion before running the workspace regex (e.g., trim everything from the first
'?' onward), then run the existing match (/\/workspace\/([^/]+)/) to set
workspaceMatch/currentWorkspaceId so the captured ID never includes query
parameters.
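The query-string bug above is easy to see in isolation. A minimal sketch of the corrected extraction (the helper name `extractWorkspaceId` is an assumption; the regex is the one from the reviewed code):

```typescript
// Strip the leading "#" and anything from the first "?" onward before
// matching, so "abc?tab=files" never leaks into the captured workspace id.
export function extractWorkspaceId(hash: string): string | null {
  const pathname = hash.replace(/^#/, "").split("?")[0] ?? "";
  const match = pathname.match(/\/workspace\/([^/]+)/);
  return match?.[1] ?? null;
}
```

Without the `split("?")`, the `[^/]+` capture would greedily include the query string, since `?` is not a path separator.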
In `@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/index.ts`:
- Around line 40-47: The validation error handling uses Zod's parsed from
tool.schema.safeParse(params ?? {}) and references parsed.error.errors which
doesn't exist in Zod 4; update the returned error construction to use
parsed.error.issues instead (i.e., replace references to parsed.error.errors
with parsed.error.issues when building the error message) so the code in the
parsed.success false branch (the safeParse result handling) works with Zod
4.3.5.
In `@apps/desktop/src/renderer/routes/_authenticated/hooks/useDevicePresence.ts`:
- Around line 16-24: The heartbeat call in useDevicePresence
(apiTrpcClient.device.heartbeat.mutate) is currently swallowing errors; change
the catch to capture the error (catch (err)) and log it with contextual prefix
and device details (e.g., deviceInfo.deviceId / deviceInfo.deviceName) instead
of ignoring it. Use the existing app logger if available, otherwise call
console.error with a clear message like "device heartbeat failed" plus the
device identifiers and the error object to keep connectivity/auth failures
observable.
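One way to keep heartbeat failures observable while still not crashing the hook is to isolate the call behind a small wrapper with an injectable logger, as sketched below. The `sendHeartbeat` name, the mutate signature, and the log format are assumptions for illustration.

```typescript
type HeartbeatFn = (input: {
  deviceId: string;
  deviceName: string;
}) => Promise<void>;

// Catch and log heartbeat failures with device context instead of
// swallowing them; the logger defaults to console.error but is injectable.
export async function sendHeartbeat({
  mutate,
  device,
  log = console.error,
}: {
  mutate: HeartbeatFn;
  device: { deviceId: string; deviceName: string };
  log?: (msg: string, err: unknown) => void;
}): Promise<void> {
  try {
    await mutate(device);
  } catch (err) {
    log(
      `[useDevicePresence] device heartbeat failed for ${device.deviceId} (${device.deviceName})`,
      err,
    );
  }
}
```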
In
`@apps/desktop/src/renderer/routes/_authenticated/providers/CollectionsProvider/collections.ts`:
- Around line 216-231: The client-side onUpdate handler for agentCommands should
not fabricate txids or coerce BigInt to Number; update the server-side
agent.updateCommand to wrap its update in a transaction and return the real
transaction id via getCurrentTxid(tx) (matching how task/repository routers do
it), then change the client onUpdate (the function using
apiClient.agent.updateCommand.mutate) to remove the Date.now() fallback and the
Number(...) conversion so it expects and uses the BigInt txid returned by the
server consistently.
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/api-keys/components/ApiKeysSettings/ApiKeysSettings.tsx`:
- Around line 110-117: handleRevokeKey currently revokes immediately; add a user
confirmation step before calling apiTrpcClient.apiKeys.revoke.mutate. Update the
handleRevokeKey function to prompt the user (e.g., using window.confirm or the
app's confirmation modal) with a clear irreversible-warning message, and only
call apiTrpcClient.apiKeys.revoke.mutate({ id }) and then loadApiKeys() if the
user confirms; if the user cancels, simply return without making the API call or
logging an error.
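The confirm-before-revoke flow above can be factored so it is testable without a browser: inject the confirmation and revoke functions, and only call the mutation on confirmation. The `revokeWithConfirmation` name and warning copy are assumptions; the UI would pass `window.confirm` (or the app's modal) and the tRPC mutation.

```typescript
// Returns true if the key was revoked, false if the user cancelled.
export async function revokeWithConfirmation({
  id,
  confirmFn,
  revokeFn,
}: {
  id: string;
  confirmFn: (message: string) => boolean;
  revokeFn: (input: { id: string }) => Promise<void>;
}): Promise<boolean> {
  const ok = confirmFn(
    "Revoke this API key? Clients using it will immediately lose access. This cannot be undone.",
  );
  if (!ok) return false;
  await revokeFn({ id });
  return true;
}
```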
- Around line 119-123: handleCopyKey currently calls
navigator.clipboard.writeText(newKeyValue) without error handling and uses the
magic number 2000; wrap the clipboard call in a try/catch (or make the handler
async) to catch and log/report errors (e.g., console.error or set an error
state) and only setCopied(true) on success, and extract the timeout 2000 into a
named constant like COPY_FEEDBACK_DURATION_MS used in setTimeout(() =>
setCopied(false), COPY_FEEDBACK_DURATION_MS) so failures are handled gracefully
and the duration is centralized; update references to handleCopyKey,
newKeyValue, and setCopied accordingly.
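A sketch of the hardened copy handler described above, with the clipboard write injected so it can fail in tests. The `copyKey` name and callback shape are assumptions; in the component, `writeText` would be `navigator.clipboard.writeText` and `onCopied` would set the copied state (with the timeout using the named constant).

```typescript
export const COPY_FEEDBACK_DURATION_MS = 2_000;

// Only report success after the clipboard write resolves; route failures
// to onError instead of letting the rejection go unhandled.
export async function copyKey({
  value,
  writeText,
  onCopied,
  onError,
}: {
  value: string;
  writeText: (text: string) => Promise<void>;
  onCopied: () => void;
  onError: (err: unknown) => void;
}): Promise<void> {
  try {
    await writeText(value);
    onCopied();
  } catch (err) {
    onError(err);
  }
}
```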
- Around line 259-261: The Enter key handler for the API key generation should
respect the in-progress state to avoid double submissions: update the onKeyDown
handler in ApiKeysSettings to check the isGenerating flag before calling
handleGenerateKey (e.g., only call handleGenerateKey when e.key === "Enter" &&
!isGenerating) so rapid Enter presses are ignored while generation is underway.
In `@apps/mobile/hooks/useDevicePresence/useDevicePresence.ts`:
- Around line 12-20: The SecureStore calls in getOrCreateDeviceId
(SecureStore.getItemAsync and SecureStore.setItemAsync) currently swallow
errors; update those catch handlers to log the caught error with a prefixed
context like "[device-presence]" at a low-noise level (debug or warn) instead of
silently ignoring it, and do the same for the heartbeat error handlers in this
module (the heartbeat/send registration functions that currently
catch-and-ignore errors). Ensure the log includes the failing function name and
the error object, and decide per callsite whether to let the error propagate
after logging or return a safe fallback so behavior remains explicit.
In `@apps/mobile/package.json`:
- Around line 33-36: Update the expo-crypto dependency in package.json from a
caret range to the Expo SDK 54-compatible tilde range: replace "expo-crypto":
"^15.0.8" with "expo-crypto": "~15.0.8" so the project uses the exact minor
version aligned with SDK 54 (locate the dependency entry "expo-crypto" in
package.json and update its version string).
In `@packages/db/drizzle/0012_add_mcp_actions_and_presence.sql`:
- Line 31: The migration defines "usage_count" as text but it should be numeric;
update the column definition in 0012_add_mcp_actions_and_presence.sql to use an
integer (or bigint if large counts expected) with DEFAULT 0 NOT NULL for proper
numeric ordering and atomic increments, and add a safe conversion for existing
data (CAST from text to integer) in the migration so existing rows with numeric
strings are migrated; also review any related INSERT/UPDATE statements or
application code that treat usage_count as text and change them to use numeric
operations (e.g., increments) against the usage_count column.
In `@packages/db/src/schema/schema.ts`:
- Around line 221-246: The unique index defined on devicePresence
(uniqueIndex("device_presence_user_device_idx")) currently only covers
table.userId and table.deviceId causing cross-organization collisions; update
that unique index to include table.organizationId as well (and optionally rename
the index to something like "device_presence_user_org_device_idx") so the
uniqueness is enforced per (userId, organizationId, deviceId) within the
devicePresence table.
- Around line 299-312: The usageCount column is defined as text which prevents
correct numeric ordering and arithmetic; update the schema definition for
usageCount (symbol: usageCount) to use an integer column instead of text and set
a numeric default (e.g., change text("usage_count").notNull().default("0") to an
integer column with .notNull() and a numeric default like 0). Ensure any code
that reads/writes usageCount (column name "usage_count") expects an integer type
after this change.
- Around line 254-289: Add a self-referencing foreign key for parentCommandId
and constrain targetDeviceType to the existing device-type enum: change
parentCommandId definition in agentCommands to reference the agentCommands.id
with onDelete: "cascade" (e.g., parentCommandId:
uuid("parent_command_id").references(() => agentCommands.id, { onDelete:
"cascade" })), and replace targetDeviceType: text("target_device_type") with the
project’s device-type enum column (e.g., targetDeviceType:
deviceType("target_device_type") or the existing enum helper used elsewhere)
keeping nullable semantics as before so device type is validated against the
same enum used elsewhere.
♻️ Duplicate comments (2)
apps/api/src/lib/mcp/tools.ts (2)
213-230: Guard against empty updates before calling `.set({})`. Still missing the empty-update validation that prevents Drizzle from throwing on `set({})`.
(Web query: Drizzle ORM 0.45.1 behavior when calling `update().set({})` with an empty object.)
821-867: Forward `taskId` to the device command (still missing). `taskId` is accepted by the tool schema but not forwarded, so the desktop can't associate the workspace with the task.
🧹 Nitpick comments (14)
apps/desktop/src/renderer/routes/_authenticated/settings/devices/components/DevicesSettings/DevicesSettings.tsx (2)
31-49: Extract polling interval to a named constant and add error logging. Per coding guidelines:
- Magic numbers should be extracted to named constants at module top
- Errors should be logged with context, not just stored in state
♻️ Suggested improvement
Add at module level (after line 24):
const DEVICE_POLLING_INTERVAL_MS = 10_000;Update
fetchDevicesto log errors:const fetchDevices = useCallback(async () => { try { setLoading(true); setError(null); const result = await apiTrpcClient.device.listOnlineDevices.query(); setDevices(result); } catch (err) { + console.error("[DevicesSettings/fetchDevices] Failed to fetch devices:", err); setError(err instanceof Error ? err.message : "Failed to fetch devices"); } finally { setLoading(false); } }, []);Update the interval reference:
-const interval = setInterval(fetchDevices, 10_000); +const interval = setInterval(fetchDevices, DEVICE_POLLING_INTERVAL_MS);
51-57: Consider adding hours handling for better time display. The function jumps from minutes directly to `toLocaleTimeString()` for anything >= 60 minutes. Adding an hours case would provide more meaningful relative times.

♻️ Optional enhancement

```diff
 const formatLastSeen = (date: Date) => {
   const seconds = Math.floor((Date.now() - new Date(date).getTime()) / 1000);
   if (seconds < 60) return `${seconds}s ago`;
   const minutes = Math.floor(seconds / 60);
   if (minutes < 60) return `${minutes}m ago`;
+  const hours = Math.floor(minutes / 60);
+  if (hours < 24) return `${hours}h ago`;
   return new Date(date).toLocaleTimeString();
 };
```

apps/mobile/hooks/useDevicePresence/useDevicePresence.ts (1)
9-11: Extract hardcoded device strings to module constants. Move `"mobile"`, `"iPhone"`, and `"Android"` to named constants for clarity and consistency. As per coding guidelines.

♻️ Suggested refactor

```diff
 const HEARTBEAT_INTERVAL_MS = 30_000;
 const DEVICE_ID_KEY = "superset-device-id";
+const DEVICE_TYPE_MOBILE = "mobile";
+const DEFAULT_DEVICE_NAME_IOS = "iPhone";
+const DEFAULT_DEVICE_NAME_ANDROID = "Android";
 @@
       await apiClient.device.heartbeat.mutate({
         deviceId,
         deviceName:
           Constants.deviceName ??
-          (Platform.OS === "ios" ? "iPhone" : "Android"),
-        deviceType: "mobile",
+          (Platform.OS === "ios"
+            ? DEFAULT_DEVICE_NAME_IOS
+            : DEFAULT_DEVICE_NAME_ANDROID),
+        deviceType: DEVICE_TYPE_MOBILE,
       });
```

Also applies to: 41-45
apps/mobile/screens/(authenticated)/index/HomeScreen.tsx (2)
34-53: Extract the poll interval and reuse it in the UI + tighten log prefix. Keeps polling and UI copy in sync and aligns logging prefix with the required pattern.

♻️ Suggested update

```diff
+const DEVICE_POLL_INTERVAL_MS = 10_000;
+const DEVICE_POLL_INTERVAL_LABEL = `${DEVICE_POLL_INTERVAL_MS / 1000}s`;
 const fetchDevices = useCallback(async () => {
   try {
     setDevicesLoading(true);
     const result = await apiClient.device.listOnlineDevices.query();
     setDevices(result);
   } catch (err) {
-    console.warn("[devices] Failed to fetch:", err);
+    console.warn("[devices/fetch] Failed to fetch devices:", err);
   } finally {
     setDevicesLoading(false);
   }
 }, []);

 useEffect(() => {
   fetchDevices();
-  const interval = setInterval(fetchDevices, 10_000);
+  const interval = setInterval(fetchDevices, DEVICE_POLL_INTERVAL_MS);
   return () => clearInterval(interval);
 }, [fetchDevices]);

 <CardDescription>
-  Devices connected to your account (refreshes every 10s)
+  Devices connected to your account (refreshes every {DEVICE_POLL_INTERVAL_LABEL})
 </CardDescription>
```

As per coding guidelines, extract magic numbers and use the prescribed log prefix format.

Also applies to: 104-106
19-28: Consider deriving `OnlineDevice` from the API output to avoid drift. The local interface is missing the `createdAt` field returned by `listOnlineDevices.query()`. Use `RouterOutputs["device"]["listOnlineDevices"][number]` or infer the type directly from the API client to keep the client type automatically aligned with the server response.

♻️ Possible refactor

```diff
-interface OnlineDevice {
-  id: string;
-  deviceId: string;
-  deviceName: string;
-  deviceType: "desktop" | "mobile" | "web";
-  lastSeenAt: Date;
-  ownerId: string;
-  ownerName: string;
-  ownerEmail: string;
-}
+import type { RouterOutputs } from "@superset/trpc";
+
+type OnlineDevice = RouterOutputs["device"]["listOnlineDevices"][number];
```

packages/db/drizzle/0012_add_mcp_actions_and_presence.sql (1)
7-8: Constrain `target_device_type` to the `device_type` enum. Using `text` allows invalid values and weakens data integrity.

💡 Suggested fix

```diff
-    "target_device_type" text,
+    "target_device_type" "device_type",
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/list-workspaces.ts (1)
16-19: Consider explicit field mapping instead of unsafe cast. The `as unknown as Record<string, unknown>[]` cast bypasses type safety and is inconsistent with `list-projects.ts`, which explicitly maps fields. Explicit mapping provides better control over the API surface and avoids accidentally exposing internal fields.

♻️ Suggested refactor to match list-projects pattern

```diff
 return {
   success: true,
-  data: { workspaces: workspaces as unknown as Record<string, unknown>[] },
+  data: {
+    workspaces: workspaces.map((ws) => ({
+      id: ws.id,
+      name: ws.name,
+      branch: ws.branch,
+      // ... other fields you want to expose
+    })),
+  },
 };
```

apps/api/src/app/api/mcp/route.ts (1)
133-145: Consider restricting CORS origin for production. The `Access-Control-Allow-Origin: "*"` header allows any origin to call this API. While this may be intentional for MCP client compatibility, consider restricting to known origins in production or making it configurable via environment variable.

♻️ Suggested improvement

```diff
+const ALLOWED_ORIGIN = process.env.MCP_CORS_ORIGIN ?? "*";
+
 export async function OPTIONS(): Promise<Response> {
   return new Response(null, {
     status: 204,
     headers: {
-      "Access-Control-Allow-Origin": "*",
+      "Access-Control-Allow-Origin": ALLOWED_ORIGIN,
       "Access-Control-Allow-Methods": "GET, POST, DELETE, OPTIONS",
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/types.ts (1)
26-31: Consider using an object parameter for the `execute` signature. Per coding guidelines, functions with 2+ parameters should accept a single params object. While this works, using named properties improves clarity and extensibility.

♻️ Suggested improvement

```diff
 export interface ToolDefinition<T extends z.ZodType> {
   name: string;
   schema: T;
-  execute: (params: z.infer<T>, ctx: ToolContext) => Promise<CommandResult>;
+  execute: (args: { params: z.infer<T>; ctx: ToolContext }) => Promise<CommandResult>;
 }
```

This would require updating all tool implementations but provides better self-documentation and future extensibility (e.g., adding `signal` for cancellation).

packages/trpc/src/router/device/device.ts (1)
63-67: Inconsistent error handling between procedures. `heartbeat` throws a `TRPCError` when no active organization is set (lines 26-29), but `listOnlineDevices` returns an empty array. This inconsistency could confuse clients. Consider throwing in both cases or documenting the different behaviors.

♻️ Option 1: Make listOnlineDevices throw for consistency

```diff
 listOnlineDevices: protectedProcedure.query(async ({ ctx }) => {
   const organizationId = ctx.session.session.activeOrganizationId;
   if (!organizationId) {
-    return [];
+    throw new TRPCError({
+      code: "BAD_REQUEST",
+      message: "No active organization selected",
+    });
   }
```

apps/api/src/lib/mcp/tools.ts (2)
524-568: Avoid duplicating the list_members query shape. You can build a single query and append the optional search condition to reduce duplication.

♻️ Suggested refactor

```diff
-  let query = db
-    .select({
-      id: users.id,
-      name: users.name,
-      email: users.email,
-      image: users.image,
-      role: members.role,
-    })
-    .from(members)
-    .innerJoin(users, eq(members.userId, users.id))
-    .where(and(...conditions))
-    .limit(params.limit);
-
-  if (params.search) {
-    query = db
-      .select({
-        id: users.id,
-        name: users.name,
-        email: users.email,
-        image: users.image,
-        role: members.role,
-      })
-      .from(members)
-      .innerJoin(users, eq(members.userId, users.id))
-      .where(
-        and(
-          ...conditions,
-          or(
-            ilike(users.name, `%${params.search}%`),
-            ilike(users.email, `%${params.search}%`),
-          ),
-        ),
-      )
-      .limit(params.limit);
-  }
-
-  const membersList = await query;
+  if (params.search) {
+    conditions.push(
+      or(
+        ilike(users.name, `%${params.search}%`),
+        ilike(users.email, `%${params.search}%`),
+      ),
+    );
+  }
+
+  const membersList = await db
+    .select({
+      id: users.id,
+      name: users.name,
+      email: users.email,
+      image: users.image,
+      role: members.role,
+    })
+    .from(members)
+    .innerJoin(users, eq(members.userId, users.id))
+    .where(and(...conditions))
+    .limit(params.limit);
```
612-655: Extract the "recently offline" window into a named constant. Avoid magic multipliers in-line for clarity and reuse.

♻️ Suggested refactor

```diff
 const DEVICE_ONLINE_THRESHOLD_MS = 60_000; // 60 seconds
+const RECENTLY_OFFLINE_WINDOW_MS = DEVICE_ONLINE_THRESHOLD_MS * 10; // 10 minutes
 ...
-  const offlineThreshold = new Date(
-    Date.now() - DEVICE_ONLINE_THRESHOLD_MS * 10,
-  ); // 10 minutes for recently offline
+  const offlineThreshold = new Date(
+    Date.now() - RECENTLY_OFFLINE_WINDOW_MS,
+  );
```

As per coding guidelines, extract magic numbers to named constants at module top.
apps/desktop/src/renderer/routes/_authenticated/hooks/useCommandWatcher.ts (2)
76-81: Use a params object for `processCommand`. Positional args make the call site less self-documenting and harder to extend.

♻️ Proposed refactor

```diff
+type ProcessCommandParams = {
+  commandId: string;
+  tool: string;
+  params: Record<string, unknown> | null;
+};
+
 const processCommand = useCallback(
-  async (
-    commandId: string,
-    tool: string,
-    params: Record<string, unknown> | null,
-  ) => {
+  async ({ commandId, tool, params }: ProcessCommandParams) => {
     if (processingCommands.has(commandId)) return;
 @@
-    for (const cmd of commandsForThisDevice) {
-      processCommand(cmd.id, cmd.tool, cmd.params);
-    }
+    for (const cmd of commandsForThisDevice) {
+      processCommand({
+        commandId: cmd.id,
+        tool: cmd.tool,
+        params: cmd.params,
+      });
+    }
```

As per coding guidelines, functions with 2+ parameters should accept a single params object.
Also applies to: 165-167
16-16: Extract the 100ms delay into a named constant. This removes a magic number and makes future tuning safer.

♻️ Proposed refactor

```diff
 const processingCommands = new Set<string>();
+const COMMAND_STATUS_DELAY_MS = 100;
 @@
-      await new Promise((resolve) => setTimeout(resolve, 100));
+      await new Promise((resolve) =>
+        setTimeout(resolve, COMMAND_STATUS_DELAY_MS),
+      );
```

As per coding guidelines, extract magic numbers to named constants.
Also applies to: 94-103
```ts
const DEVICE_ONLINE_THRESHOLD_MS = 60_000; // 60 seconds

/**
 * Register all MCP tools on the server
 */
export function registerMcpTools(server: McpServer, ctx: McpContext) {
  // ========================================
```
🧩 Analysis chain
🏁 Script executed:
```shell
# First, verify the file exists and check the actual code at lines 17-23
cat -n apps/api/src/lib/mcp/tools.ts | sed -n '1,30p'
```

Repository: superset-sh/superset
Length of output: 1133
🏁 Script executed:
```shell
# Search for all call sites of registerMcpTools
rg -n "registerMcpTools" --type ts --type tsx
```

Repository: superset-sh/superset
Length of output: 91
🏁 Script executed:
```shell
# Search for all call sites of registerMcpTools
rg "registerMcpTools" -t ts -A 2 -B 2
```

Repository: superset-sh/superset
Length of output: 929
Refactor registerMcpTools to accept a params object instead of positional parameters.
Per TypeScript guidelines, functions with 2+ parameters should accept a single object parameter for better self-documentation and extensibility. Update the function definition and the one call site in apps/api/src/app/api/mcp/route.ts.
Suggested changes

```diff
-export function registerMcpTools(server: McpServer, ctx: McpContext) {
+export function registerMcpTools({
+  server,
+  ctx,
+}: { server: McpServer; ctx: McpContext }) {
```

Call site in apps/api/src/app/api/mcp/route.ts:

```diff
-  registerMcpTools(server, ctx);
+  registerMcpTools({ server, ctx });
```

🤖 Prompt for AI Agents
In `@apps/api/src/lib/mcp/tools.ts` around lines 17 - 23, The function
registerMcpTools currently takes two positional parameters (McpServer,
McpContext); change its signature to accept a single params object (e.g. {
server: McpServer, ctx: McpContext }) and update the function body to
destructure server and ctx where used. Also update the one call site in
apps/api/src/app/api/mcp/route.ts to pass an object with those named properties
instead of two positional args, and update type annotations accordingly so
TypeScript infers the new shape.
```ts
// Generate slug from title
const slug = params.title
  .toLowerCase()
  .replace(/[^a-z0-9]+/g, "-")
  .replace(/^-|-$/g, "")
  .slice(0, 50);

// Check for existing slug and make unique
const existingTasks = await db
  .select({ slug: tasks.slug })
  .from(tasks)
  .where(
    and(
      eq(tasks.organizationId, ctx.organizationId),
      ilike(tasks.slug, `${slug}%`),
    ),
  );

let uniqueSlug = slug;
if (existingTasks.length > 0) {
  const existingSlugs = new Set(existingTasks.map((t) => t.slug));
  let counter = 1;
  while (existingSlugs.has(uniqueSlug)) {
    uniqueSlug = `${slug}-${counter}`;
    counter++;
  }
}
```
Slug generation can yield empty or globally-duplicate slugs.
If the title slugifies to empty (e.g., "!!!"), you insert an empty slug. Also, the uniqueness check is org-scoped but the DB uniqueness is global, which can still raise conflicts.
🐛 Suggested fix
```diff
-  const slug = params.title
+  const baseSlug = params.title
     .toLowerCase()
     .replace(/[^a-z0-9]+/g, "-")
     .replace(/^-|-$/g, "")
     .slice(0, 50);
+  const slug = baseSlug.length > 0 ? baseSlug : "task";

   // Check for existing slug and make unique
   const existingTasks = await db
     .select({ slug: tasks.slug })
     .from(tasks)
-    .where(
-      and(
-        eq(tasks.organizationId, ctx.organizationId),
-        ilike(tasks.slug, `${slug}%`),
-      ),
-    );
+    .where(ilike(tasks.slug, `${slug}%`));
```

🤖 Prompt for AI Agents
In `@apps/api/src/lib/mcp/tools.ts` around lines 92 - 118, The slug generation can
produce an empty string and the uniqueness check only queries
organization-scoped slugs while the DB uniqueness is global; update the logic
around slug/uniqueSlug (the slug variable and the uniqueness loop that uses
existingTasks, existingSlugs, db and tasks) to: 1) provide a deterministic
fallback when slug === "" (e.g., "untitled" + short random/ts token) before any
DB checks, and 2) ensure uniqueness against global slugs by querying without the
org filter (or by catching a global unique-constraint DB error on insert) when
building existingTasks/existingSlugs so the while loop de-duplicates globally
rather than only per-organization.
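The two fixes above can be sketched as small pure helpers; a minimal sketch (the `slugify`/`uniquifySlug` names and the `"task"` fallback are illustrative, not the PR's actual code):

```typescript
// Slugify a title with a deterministic fallback for titles that
// reduce to an empty string (e.g. "!!!").
function slugify(title: string): string {
  const base = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "")
    .slice(0, 50);
  return base.length > 0 ? base : "task"; // fallback when title slugifies to ""
}

// Counter-based de-duplication against a set of already-taken slugs
// (the set should be built from a global query if DB uniqueness is global).
function uniquifySlug(slug: string, existing: Set<string>): string {
  let unique = slug;
  let counter = 1;
  while (existing.has(unique)) {
    unique = `${slug}-${counter}`;
    counter++;
  }
  return unique;
}
```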
```ts
// Fall back to the most recently used workspace's project
if (!projectId) {
  const sorted = [...workspaces].sort(
    (a, b) => (b.lastOpenedAt ?? 0) - (a.lastOpenedAt ?? 0),
  );
  projectId = sorted[0].projectId;
}

try {
  const result = await ctx.createWorktree.mutateAsync({
    projectId,
    name: params.name,
    branchName: params.branchName,
    baseBranch: params.baseBranch,
  });
```
Guard against unresolved projectId before calling the mutation.
If projectId is still null/undefined, fail fast with a clear error instead of invoking the mutation.
💡 Suggested fix
```diff
   // Fall back to the most recently used workspace's project
   if (!projectId) {
     const sorted = [...workspaces].sort(
       (a, b) => (b.lastOpenedAt ?? 0) - (a.lastOpenedAt ?? 0),
     );
     projectId = sorted[0].projectId;
   }
+
+  if (!projectId) {
+    return { success: false, error: "No project available for workspace" };
+  }

   try {
     const result = await ctx.createWorktree.mutateAsync({
       projectId,
```
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/create-worktree.ts`
around lines 32 - 46, After the fallback logic that computes projectId from
workspaces, add a guard that verifies projectId is defined before calling
ctx.createWorktree.mutateAsync; if projectId is still null/undefined, throw or
return a clear error (e.g., an Error with a descriptive message) so the mutation
is not invoked with an invalid projectId. Update the create-worktree flow around
the symbols projectId, workspaces, and ctx.createWorktree.mutateAsync to perform
this early check and fail fast with a readable error.
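The fail-fast guard described above can be sketched as a pure helper (the `resolveProjectId` name and the simplified workspace type are assumptions for illustration):

```typescript
// Simplified workspace shape for illustration.
type WorkspaceLike = { projectId: string; lastOpenedAt?: number };

// Resolve a project id: prefer an explicit id, otherwise fall back to the
// most recently opened workspace, and fail fast with a readable error.
function resolveProjectId(
  explicit: string | undefined,
  workspaces: WorkspaceLike[],
): { ok: true; projectId: string } | { ok: false; error: string } {
  if (explicit) return { ok: true, projectId: explicit };
  const sorted = [...workspaces].sort(
    (a, b) => (b.lastOpenedAt ?? 0) - (a.lastOpenedAt ?? 0),
  );
  const fallback = sorted[0]?.projectId;
  if (!fallback) {
    return { ok: false, error: "No project available for workspace" };
  }
  return { ok: true, projectId: fallback };
}
```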
```ts
const hash = window.location.hash;
const pathname = hash.startsWith("#") ? hash.slice(1) : hash;

// Parse workspace ID from route if present (route is /workspace/$workspaceId)
const workspaceMatch = pathname.match(/\/workspace\/([^/]+)/);
const currentWorkspaceId = workspaceMatch ? workspaceMatch[1] : null;
```
Strip query string before extracting the workspace ID.
If the hash contains a query string, the regex captures it as part of the ID.
💡 Suggested fix
```diff
   const hash = window.location.hash;
   const pathname = hash.startsWith("#") ? hash.slice(1) : hash;
+  const pathOnly = pathname.split("?")[0];

   // Parse workspace ID from route if present (route is /workspace/$workspaceId)
-  const workspaceMatch = pathname.match(/\/workspace\/([^/]+)/);
+  const workspaceMatch = pathOnly.match(/\/workspace\/([^/]+)/);
```

🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/get-app-context.ts`
around lines 11 - 16, The code currently extracts currentWorkspaceId from
pathname which can include a query string; update the logic that computes
pathname (after removing the leading '#') to strip any query string or search
portion before running the workspace regex (e.g., trim everything from the first
'?' onward), then run the existing match (/\/workspace\/([^/]+)/) to set
workspaceMatch/currentWorkspaceId so the captured ID never includes query
parameters.
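The query-safe extraction can be sketched as a pure function (the `extractWorkspaceId` name is illustrative):

```typescript
// Extract a workspace id from a hash-based route, stripping any query
// string first so the captured id never includes query parameters.
function extractWorkspaceId(hash: string): string | null {
  const pathname = hash.startsWith("#") ? hash.slice(1) : hash;
  const pathOnly = pathname.split("?")[0];
  const match = pathOnly.match(/\/workspace\/([^/]+)/);
  return match ? match[1] : null;
}
```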
```ts
// Validate params
const parsed = tool.schema.safeParse(params ?? {});
if (!parsed.success) {
  return {
    success: false,
    error: `Invalid params: ${parsed.error.errors.map((e: { message: string }) => e.message).join(", ")}`,
  };
}
```
🧩 Analysis chain
🌐 Web query:
Zod 4 safeParse error format issues vs errors compatibility
💡 Result:
In Zod v4, safeParse() still returns { success: false, error: ZodError }, but the error shape / helpers changed in ways that commonly break v3-era code and integrations:
1) error.errors (v3) is gone → use error.issues
In v3, ZodError.errors was an alias of ZodError.issues. In v4, ZodError.errors was removed, so code like result.error.errors will be undefined and crash. Use result.error.issues instead. [1]
Compatibility shim (if you can’t update all call sites yet):
```ts
function compatIssues(err: { issues: unknown[]; errors?: unknown[] }) {
  return (err as any).errors ?? err.issues;
}
```

2) .format() / .flatten() compatibility
Zod v4 deprecates the old “formatted/flattened” APIs in favor of z.treeifyError(). If your app/libraries expect the output shape of error.format() or error.flatten(), you’ll likely need to migrate that logic to treeifyError() (or keep v3 for those parts). [2]
3) Message / issue differences can change downstream “error formatting”
Even when you switch to .issues, you may still see different results because v4 changed default messages/issue generation in multiple places (for example, “Required” messaging and other validation behaviors), which can affect code that string-matches messages or assumes a single issue. [3]
Practical migration checklist
- Replace `result.error.errors` → `result.error.issues`. [1]
- If you relied on `format()`/`flatten()`, migrate to `z.treeifyError()` output (or adapt it to your UI format). [2]
- Avoid relying on exact default error messages; map issues by `code`/`path` instead. [3]
Sources:
[1] Zod repo issue documenting ZodError.errors removal in v4 colinhacks/zod#5063
[2] Zod v4 migration notes: .format() / .flatten() deprecated; use z.treeifyError() https://deepwiki.com/colinhacks/zod/5.4-zod-4-release-and-migration
[3] Example of v4 default error-message behavior changing vs v3 colinhacks/zod#5032
🏁 Script executed:
```shell
# Find and check the package.json files to determine Zod version
find . -name "package.json" -type f | head -20 | xargs grep -l "zod" | head -5
```

Repository: superset-sh/superset
Length of output: 196
🏁 Script executed:
```shell
# Read the specific file mentioned in the review
cat -n apps/desktop/src/renderer/routes/_authenticated/hooks/tools/index.ts | head -50
```

Repository: superset-sh/superset
Length of output: 1867
🏁 Script executed:
```shell
# Check Zod version in the repository
grep -r "\"zod\":" apps/*/package.json
```

Repository: superset-sh/superset
Length of output: 360
🏁 Script executed:
```shell
# Also check root package.json if it exists
if [ -f "package.json" ]; then grep "\"zod\":" package.json; fi
```

Repository: superset-sh/superset
Length of output: 46
Replace parsed.error.errors with parsed.error.issues for Zod 4 compatibility.
The repository uses Zod 4.3.5, which removed the .errors property entirely. Line 45 will crash at runtime with "Cannot read property 'map' of undefined" whenever parameter validation fails. Use parsed.error.issues instead:
Fix
```ts
error: `Invalid params: ${parsed.error.errors.map((e: { message: string }) => e.message).join(", ")}`,
```

Change to:

```ts
error: `Invalid params: ${parsed.error.issues.map((e: { message: string }) => e.message).join(", ")}`,
```
🤖 Prompt for AI Agents
In `@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/index.ts` around
lines 40 - 47, The validation error handling uses Zod's parsed from
tool.schema.safeParse(params ?? {}) and references parsed.error.errors which
doesn't exist in Zod 4; update the returned error construction to use
parsed.error.issues instead (i.e., replace references to parsed.error.errors
with parsed.error.issues when building the error message) so the code in the
parsed.success false branch (the safeParse result handling) works with Zod
4.3.5.
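A version-agnostic sketch of the message builder, using plain structural types so it works whether the error object exposes `.issues` (Zod 4) or the legacy `.errors` alias (Zod 3) — the `formatValidationError` helper is illustrative, not the PR's code:

```typescript
// Structural stand-ins for ZodIssue / ZodError, so this compiles without zod.
type IssueLike = { message: string };
type ZodErrorLike = { issues: IssueLike[]; errors?: IssueLike[] };

// Prefer the v3 alias when present, otherwise use `.issues` (the only
// property available in Zod 4).
function formatValidationError(err: ZodErrorLike): string {
  const issues = err.errors ?? err.issues;
  return `Invalid params: ${issues.map((e) => e.message).join(", ")}`;
}
```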
| "expo": "~54.0.31", | ||
| "expo-constants": "^18.0.13", | ||
| "expo-crypto": "^15.0.8", | ||
| "expo-dev-client": "~6.0.20", |
🧩 Analysis chain
🌐 Web query:
Expo SDK 54 expo-crypto compatible version compatibility matrix
💡 Result:
Expo SDK ↔ expo-crypto compatibility (bundled versions)
| Expo SDK | Use this expo-crypto version (via `npx expo install expo-crypto`) |
|---|---|
| 54.x | ~15.0.8 (docs.expo.dev) |
| 53.x | ~14.1.5 (docs.expo.dev) |
| 52.x | ~14.0.2 (docs.expo.dev) |
| 55.x | ~55.0.1 (docs.expo.dev) |
If you specifically mean Expo SDK 54, the compatible/bundled range is expo-crypto@~15.0.8. (docs.expo.dev)
Citations:
- 1: https://docs.expo.dev/versions/v54.0.0/sdk/crypto/
- 2: https://docs.expo.dev/versions/v53.0.0/sdk/crypto/
- 3: https://docs.expo.dev/versions/v52.0.0/sdk/crypto/
- 4: https://docs.expo.dev/versions/v55.0.0/sdk/crypto/
- 5: https://docs.expo.dev/versions/v54.0.0/sdk/crypto/
Change expo-crypto version to match Expo SDK 54 specification.
Official Expo SDK 54 documentation specifies expo-crypto@~15.0.8. The current ^15.0.8 allows minor/patch version flexibility that may introduce incompatibilities. Update to use tilde versioning for consistency with Expo's bundled version.
- "expo-crypto": "^15.0.8",
+ "expo-crypto": "~15.0.8",📝 Committable suggestion
🤖 Prompt for AI Agents
In `@apps/mobile/package.json` around lines 33 - 36, Update the expo-crypto
dependency in package.json from a caret range to the Expo SDK 54-compatible
tilde range: replace "expo-crypto": "^15.0.8" with "expo-crypto": "~15.0.8" so
the project uses the exact minor version aligned with SDK 54 (locate the
dependency entry "expo-crypto" in package.json and update its version string).
| "key_hash" text NOT NULL, | ||
| "default_device_id" text, | ||
| "last_used_at" timestamp, | ||
| "usage_count" text DEFAULT '0' NOT NULL, |
Store usage_count as numeric, not text.
Keeping counts as text breaks numeric ordering and complicates atomic increments.
💡 Suggested fix
- "usage_count" text DEFAULT '0' NOT NULL,
+ "usage_count" bigint DEFAULT 0 NOT NULL,📝 Committable suggestion
🤖 Prompt for AI Agents
In `@packages/db/drizzle/0012_add_mcp_actions_and_presence.sql` at line 31, The
migration defines "usage_count" as text but it should be numeric; update the
column definition in 0012_add_mcp_actions_and_presence.sql to use an integer (or
bigint if large counts expected) with DEFAULT 0 NOT NULL for proper numeric
ordering and atomic increments, and add a safe conversion for existing data
(CAST from text to integer) in the migration so existing rows with numeric
strings are migrated; also review any related INSERT/UPDATE statements or
application code that treat usage_count as text and change them to use numeric
operations (e.g., increments) against the usage_count column.
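If the migration has already been applied in some environments, a follow-up migration could convert the column in place; a sketch assuming existing values are numeric strings (table and column names from this PR):

```sql
-- Hypothetical follow-up migration: text -> bigint with an explicit cast.
ALTER TABLE "api_keys"
  ALTER COLUMN "usage_count" DROP DEFAULT,
  ALTER COLUMN "usage_count" TYPE bigint USING ("usage_count"::bigint),
  ALTER COLUMN "usage_count" SET DEFAULT 0;

-- Increments then become atomic numeric operations:
-- UPDATE "api_keys" SET "usage_count" = "usage_count" + 1 WHERE "id" = $1;
```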
```ts
// Device presence - tracks online devices for command routing
export const devicePresence = pgTable(
  "device_presence",
  {
    id: uuid().primaryKey().defaultRandom(),
    userId: uuid("user_id")
      .notNull()
      .references(() => users.id, { onDelete: "cascade" }),
    organizationId: uuid("organization_id")
      .notNull()
      .references(() => organizations.id, { onDelete: "cascade" }),
    deviceId: text("device_id").notNull(),
    deviceName: text("device_name").notNull(),
    deviceType: deviceType("device_type").notNull(),
    lastSeenAt: timestamp("last_seen_at").notNull().defaultNow(),
    createdAt: timestamp("created_at").notNull().defaultNow(),
  },
  (table) => [
    index("device_presence_user_org_idx").on(
      table.userId,
      table.organizationId,
    ),
    uniqueIndex("device_presence_user_device_idx").on(
      table.userId,
      table.deviceId,
    ),
```
Unique index should include organizationId to avoid cross-org collisions.
As written, the same user/device cannot appear in multiple orgs, which breaks multi-org presence tracking.
🐛 Suggested fix
- uniqueIndex("device_presence_user_device_idx").on(
- table.userId,
- table.deviceId,
- ),
+ uniqueIndex("device_presence_user_org_device_idx").on(
+ table.userId,
+ table.organizationId,
+ table.deviceId,
+ ),🤖 Prompt for AI Agents
In `@packages/db/src/schema/schema.ts` around lines 221 - 246, The unique index
defined on devicePresence (uniqueIndex("device_presence_user_device_idx"))
currently only covers table.userId and table.deviceId causing cross-organization
collisions; update that unique index to include table.organizationId as well
(and optionally rename the index to something like
"device_presence_user_org_device_idx") so the uniqueness is enforced per
(userId, organizationId, deviceId) within the devicePresence table.
```ts
// Agent commands - synced via Electric SQL to executors
export const agentCommands = pgTable(
  "agent_commands",
  {
    id: uuid().primaryKey().defaultRandom(),
    userId: uuid("user_id")
      .notNull()
      .references(() => users.id, { onDelete: "cascade" }),
    organizationId: uuid("organization_id")
      .notNull()
      .references(() => organizations.id, { onDelete: "cascade" }),
    targetDeviceId: text("target_device_id"),
    targetDeviceType: text("target_device_type"),
    tool: text().notNull(),
    params: jsonb().$type<Record<string, unknown>>(),
    parentCommandId: uuid("parent_command_id"),
    status: commandStatus().notNull().default("pending"),
    claimedBy: text("claimed_by"),
    claimedAt: timestamp("claimed_at"),
    result: jsonb().$type<Record<string, unknown>>(),
    error: text(),
    createdAt: timestamp("created_at").notNull().defaultNow(),
    executedAt: timestamp("executed_at"),
    timeoutAt: timestamp("timeout_at"),
  },
  (table) => [
    index("agent_commands_user_status_idx").on(table.userId, table.status),
    index("agent_commands_target_device_status_idx").on(
      table.targetDeviceId,
      table.status,
    ),
    index("agent_commands_org_created_idx").on(
      table.organizationId,
      table.createdAt,
    ),
  ],
```
Add FK for parentCommandId and align targetDeviceType with enum.
Parent/child commands should be referentially safe, and device type should be constrained to the same enum used elsewhere to avoid drift.
🛠️ Proposed adjustments
```diff
-  targetDeviceType: text("target_device_type"),
+  targetDeviceType: deviceType("target_device_type"),
   tool: text().notNull(),
   params: jsonb().$type<Record<string, unknown>>(),
-  parentCommandId: uuid("parent_command_id"),
+  parentCommandId: uuid("parent_command_id").references(
+    () => agentCommands.id,
+    { onDelete: "set null" },
+  ),
```
🤖 Prompt for AI Agents
In `@packages/db/src/schema/schema.ts` around lines 254 - 289, Add a
self-referencing foreign key for parentCommandId and constrain targetDeviceType
to the existing device-type enum: change the parentCommandId definition in
agentCommands to reference agentCommands.id with onDelete: "set null" (e.g.,
parentCommandId: uuid("parent_command_id").references(() => agentCommands.id, {
onDelete: "set null" })), and replace targetDeviceType:
text("target_device_type") with the project's device-type enum column (e.g.,
targetDeviceType: deviceType("target_device_type") or the existing enum helper
used elsewhere), keeping nullable semantics as before so device type is validated
against the same enum used elsewhere.
```ts
id: uuid().primaryKey().defaultRandom(),
userId: uuid("user_id")
  .notNull()
  .references(() => users.id, { onDelete: "cascade" }),
organizationId: uuid("organization_id")
  .notNull()
  .references(() => organizations.id, { onDelete: "cascade" }),
name: text().notNull(),
keyPrefix: text("key_prefix").notNull(),
keyHash: text("key_hash").notNull(),
defaultDeviceId: text("default_device_id"),
lastUsedAt: timestamp("last_used_at"),
usageCount: text("usage_count").notNull().default("0"),
createdAt: timestamp("created_at").notNull().defaultNow(),
```
Store usageCount as an integer, not text.
Text counts break numeric ordering and arithmetic (e.g., "10" < "2"). Use an integer column with numeric default.
💡 Suggested fix
```diff
-    usageCount: text("usage_count").notNull().default("0"),
+    usageCount: integer("usage_count").notNull().default(0),
```
🤖 Prompt for AI Agents
In `@packages/db/src/schema/schema.ts` around lines 299 - 312, The usageCount
column is defined as text which prevents correct numeric ordering and
arithmetic; update the schema definition for usageCount (symbol: usageCount) to
use an integer column instead of text and set a numeric default (e.g., change
text("usage_count").notNull().default("0") to an integer column with .notNull()
and a numeric default like 0). Ensure any code that reads/writes usageCount
(column name "usage_count") expects an integer type after this change.
🧹 Preview Cleanup Complete

The following preview resources have been cleaned up:

Thank you for your contribution! 🎉
Actionable comments posted: 4
🤖 Fix all issues with AI agents
In `@apps/api/MCP_TOOLS.md`:
- Around line 204-224: The documentation is out of sync with the code: update
the docs to match the actual function and schema names used in code (or vice
versa). Specifically, replace references to get_current_workspace with
get_app_context, replace create_worktree with create_workspace, and change the
list_tasks output schema to use count instead of total (or update the
implementation to return total if you prefer). Ensure the zod schemas in the
docs (e.g., getCurrentWorkspaceInput/output and the list_tasks output) reflect
the exact field names and nullable types used by the functions get_app_context,
create_workspace, and list_tasks so names and fields match between docs and
implementation.
- Line 307: The docs reference a non-existent helper canRunTool(deviceType,
toolName); either remove that line or implement the helper and wire it into
routing: add a function named canRunTool(deviceType, toolName) that consults the
existing tool registry (e.g., DeviceToolRegistry or the module that maps device
types to supported tools) and returns a boolean, then call it from the device
routing logic (e.g., routeDeviceTool or DeviceRouter.handleRequest) to gate
routing decisions; update the docs if you remove the note or ensure the new
function name and signature match the documentation.
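A minimal sketch of what the missing helper could look like, assuming a static registry keyed by device type (the tool lists and the `"desktop"`/`"mobile"` values are illustrative assumptions, and the tool names are taken from elsewhere in this review):

```typescript
type DeviceType = "desktop" | "mobile";

// Hypothetical per-device-type registry of supported tools, consulted
// before routing a command to a device.
const DEVICE_TOOL_REGISTRY: Record<DeviceType, ReadonlySet<string>> = {
  desktop: new Set(["create_workspace", "get_app_context", "list_tasks"]),
  mobile: new Set(["get_app_context"]),
};

function canRunTool(deviceType: DeviceType, toolName: string): boolean {
  return DEVICE_TOOL_REGISTRY[deviceType].has(toolName);
}
```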
In `@packages/db/drizzle/0013_add_mcp_actions_and_presence.sql`:
- Around line 7-8: Change the "target_device_type" column on the agent_commands
table from plain text to the existing device_type enum to enforce valid values:
replace the column type definition "target_device_type" text with
"target_device_type" device_type in the migration where device_type is defined,
and update any accompanying INSERTs or test fixtures in this migration that set
target_device_type to use valid device_type enum values; if this migration has
already been applied in environments, provide an ALTER TABLE ... USING cast
migration to convert existing values to device_type safely.
In `@packages/trpc/src/router/agent/api-keys.ts`:
- Around line 265-269: The fire-and-forget DB update for apiKeys currently
swallows errors (.catch(() => {})); change the catch to log the error instead so
failures are observable — e.g. replace .catch(() => {}) on the
db.update(apiKeys).set({ lastUsedAt: new Date() }).where(eq(apiKeys.id,
found.id)) promise with .catch(err => <use available logger>.error('Failed to
update apiKeys.lastUsedAt for id=' + String(found.id), err)) or, if no logger is
in scope, .catch(err => console.error(...)); ensure you reference apiKeys,
db.update, eq(apiKeys.id, found.id) and found.id in the log message.
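The observable-failure pattern can be sketched as follows; the `formatUpdateFailure` helper and the commented call-site shape are illustrative assumptions, not the PR's actual code:

```typescript
// Build a contextual log message for a failed fire-and-forget update.
function formatUpdateFailure(keyId: string, err: unknown): string {
  const message = err instanceof Error ? err.message : String(err);
  return `Failed to update apiKeys.lastUsedAt for id=${keyId}: ${message}`;
}

// Hypothetical call site: still fire-and-forget, but failures are logged
// instead of silently swallowed by `.catch(() => {})`.
//
// db.update(apiKeys)
//   .set({ lastUsedAt: new Date() })
//   .where(eq(apiKeys.id, found.id))
//   .catch((err) => console.error(formatUpdateFailure(found.id, err)));
```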
♻️ Duplicate comments (18)
apps/desktop/VOICE_AGENT_PLAN.md (2)
73-101: Critical issues remain unaddressed in AudioRecorder pseudo-code. The critical resource management and error handling issues previously identified in this pseudo-code remain present:

- Resource leak: MediaStream tracks are never stopped after `getUserMedia`
- Missing error handling: `getUserMedia` can reject; no try-catch
- No cancellation: no cleanup mechanism for component unmount
If this plan is implemented, these issues must be addressed as detailed in the previous review comments.
265-297: Critical resource leaks remain unaddressed in useVoiceCommand pseudo-code. The critical resource management issues previously identified remain present:

- Subscription leak: tRPC subscription never unsubscribed
- No unmount cleanup: AudioRecorder never disposed
- Missing import: `electronTrpc` not imported
- No error handling: async operations can fail silently

If this plan is implemented, these issues must be addressed as detailed in the previous review comments, particularly the need for `useEffect` cleanup and subscription management.

apps/desktop/src/renderer/routes/_authenticated/settings/api-keys/components/ApiKeysSettings/ApiKeysSettings.tsx (3)
110-117: Add confirmation before revoking API keys.

Revoking an API key is irreversible. Consider adding a confirmation dialog to prevent accidental deletion.
119-123: Add error handling for clipboard API and extract magic number.

`navigator.clipboard.writeText()` can throw if permissions are denied. The timeout duration `2000` should be extracted to a named constant per coding guidelines.
259-261: Prevent double submission on Enter key.

The Enter key handler should check `isGenerating` to prevent rapid double submissions.

packages/db/src/schema/schema.ts (3)
281-284: Unique index should include organizationId to avoid cross-org collisions.

The current unique index on `(userId, deviceId)` prevents the same user/device from appearing in multiple organizations, breaking multi-org presence tracking.
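To illustrate the collision, here is a standalone sketch (the helper name and row shapes are hypothetical, not from the PR) that models a unique index as a set of composite keys; without organizationId in the key, a second organization's row for the same user/device is rejected:

```typescript
// Models a unique index as a set of composite keys. Leaving
// organizationId out of the key makes the same (userId, deviceId)
// pair collide across organizations.
function makePresenceKey({
  row,
  includeOrg,
}: {
  row: { organizationId: string; userId: string; deviceId: string };
  includeOrg: boolean;
}): string {
  const parts = includeOrg
    ? [row.organizationId, row.userId, row.deviceId]
    : [row.userId, row.deviceId];
  return parts.join(":");
}

const rowOrgA = { organizationId: "org-a", userId: "u1", deviceId: "d1" };
const rowOrgB = { organizationId: "org-b", userId: "u1", deviceId: "d1" };

// Index on (userId, deviceId): the second org's row collides.
const narrowIndex = new Set([
  makePresenceKey({ row: rowOrgA, includeOrg: false }),
]);
const collides = narrowIndex.has(
  makePresenceKey({ row: rowOrgB, includeOrg: false }),
);

// Index on (organizationId, userId, deviceId): both rows coexist.
const wideIndex = new Set([
  makePresenceKey({ row: rowOrgA, includeOrg: true }),
]);
const coexists = !wideIndex.has(
  makePresenceKey({ row: rowOrgB, includeOrg: true }),
);
```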
304-307: Add FK for parentCommandId and align targetDeviceType with enum.

`parentCommandId` should reference `agentCommands.id` for referential integrity, and `targetDeviceType` should use the `deviceType` enum for consistency.
349-349: Store usageCount as an integer, not text.

Text counts break numeric ordering and arithmetic (e.g., "10" < "2"). Use an integer column with a numeric default.

apps/desktop/src/renderer/routes/_authenticated/providers/CollectionsProvider/collections.ts (1)
216-231: Fix inconsistent and unreliable txid handling in agentCommands collection.

The `Date.now()` fallback on line 219 and `Number()` conversion on line 230 create fragile type handling. The server should return proper transaction IDs via `getCurrentTxid(tx)` like other routers.

packages/db/drizzle/0013_add_mcp_actions_and_presence.sql (1)
31-31: Store usage_count as numeric, not text.

Keeping counts as text breaks numeric ordering (e.g., "10" < "9") and complicates atomic increments. Use `bigint DEFAULT 0 NOT NULL` instead.

apps/api/src/lib/mcp/auth.ts (1)
12-23: Wrap API key validation to prevent unhandled exceptions.

`validateApiKey` performs database queries that can fail. If an exception occurs, it will propagate uncaught and result in a 500 error with no context. Per coding guidelines, errors should be logged with context.

💡 Suggested fix

```diff
 export async function authenticateMcpRequest(
   request: Request,
 ): Promise<McpContext | null> {
   const apiKey = request.headers.get("X-API-Key");
   if (!apiKey) {
     return null;
   }
-  const context = await validateApiKey(apiKey);
-  return context;
+  try {
+    return await validateApiKey(apiKey);
+  } catch (error) {
+    console.error("[mcp/auth] API key validation failed:", error);
+    return null;
+  }
 }
```

packages/trpc/src/router/agent/agent.ts (1)
43-48: Add authorization guard before updating commands.

Currently any authenticated org member can update any command in that organization. Consider restricting updates to the command creator or the claiming device to prevent cross-user tampering.

🔒 Suggested guard

```diff
   if (!existingCommand) {
     throw new TRPCError({
       code: "NOT_FOUND",
       message: "Command not found",
     });
   }
+
+  // Only allow the command creator or the claiming device to update
+  if (
+    existingCommand.userId !== ctx.session.user.id &&
+    existingCommand.claimedBy !== ctx.session.user.id
+  ) {
+    throw new TRPCError({
+      code: "FORBIDDEN",
+      message: "Not authorized to update this command",
+    });
+  }
```

apps/api/src/lib/mcp/tools.ts (4)
22-22: Refactor to use params object pattern.

Per coding guidelines, functions with 2+ parameters should accept a single object parameter. This was flagged in a previous review.

Suggested change

```diff
-export function registerMcpTools(server: McpServer, ctx: McpContext) {
+export function registerMcpTools({
+  server,
+  ctx,
+}: { server: McpServer; ctx: McpContext }) {
```
93-118: Slug generation can yield empty or globally-duplicate slugs.

If the title slugifies to empty (e.g., "!!!"), an empty slug is inserted. The uniqueness check is org-scoped but the DB constraint is global, which can still raise conflicts. This was flagged previously.

Suggested fix

```diff
-  const slug = params.title
+  const baseSlug = params.title
     .toLowerCase()
     .replace(/[^a-z0-9]+/g, "-")
     .replace(/^-|-$/g, "")
     .slice(0, 50);
+  const slug = baseSlug.length > 0 ? baseSlug : "task";

   // Check for existing slug and make unique
   const existingTasks = await db
     .select({ slug: tasks.slug })
     .from(tasks)
-    .where(
-      and(
-        eq(tasks.organizationId, ctx.organizationId),
-        ilike(tasks.slug, `${slug}%`),
-      ),
-    );
+    .where(ilike(tasks.slug, `${slug}%`));
```
213-235: Add validation to reject empty updates.

All update fields are optional, allowing a request with only `taskId` to reach `.set({})`, which Drizzle rejects. This was flagged previously.

Suggested guard

```diff
   if (params.estimate !== undefined) updateData.estimate = params.estimate;

+  if (Object.keys(updateData).length === 0) {
+    return {
+      content: [{ type: "text", text: "Error: No fields provided to update" }],
+      isError: true,
+    };
+  }
+
   const result = await dbWs.transaction(async (tx) => {
```
858-867: Forward taskId to the device command.

`taskId` is accepted by the tool schema but dropped before dispatch. This was flagged previously.

Suggested fix

```diff
   return executeOnDevice({
     ctx,
     deviceId: targetDeviceId,
     tool: "create_workspace",
     params: {
       name: params.name,
       branchName: params.branchName,
       baseBranch: params.baseBranch,
+      taskId: params.taskId,
     },
   });
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/get-app-context.ts (1)
11-16: Strip query string before extracting workspace ID.

If the hash contains a query string (e.g., `#/workspace/abc123?tab=details`), the regex captures it as part of the ID. This was flagged previously.

Suggested fix

```diff
   const hash = window.location.hash;
-  const pathname = hash.startsWith("#") ? hash.slice(1) : hash;
+  const rawPath = hash.startsWith("#") ? hash.slice(1) : hash;
+  const pathname = rawPath.split("?")[0];

   // Parse workspace ID from route if present (route is /workspace/$workspaceId)
   const workspaceMatch = pathname.match(/\/workspace\/([^/]+)/);
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/index.ts (1)
40-46: Zod v4 uses issues, not errors.

`parsed.error.errors` no longer exists in Zod 4; use `parsed.error.issues` instead.
🧹 Nitpick comments (23)
apps/desktop/VOICE_AGENT_PLAN.md (1)
7-45: Consider adding language identifiers to fenced code blocks.

The fenced code blocks containing ASCII diagrams lack language specifiers. While these are diagrams rather than code, adding `text` or `plaintext` identifiers would satisfy linting rules and improve consistency.

📝 Suggested fix

Line 7: change the opening fence from ``` to ```text

Line 301: change the opening fence from ``` to ```text

Also applies to: 301-312
apps/api/package.json (1)
16-16: Ensure ESM/CJS configuration and be aware of sampling validation schema change in v1.25.3.

This SDK is already on the latest version (1.25.3, released Jan 20, 2026) and has no breaking changes. However, proper module configuration is required:
- Ensure `package.json` contains `"type": "module"` to use the SDK's ESM exports.
- In `tsconfig.json`, set `"module": "Node16"` (or `"NodeNext"`) to correctly resolve SDK subpaths. Avoid `moduleResolution: "bundler"`, which causes import failures.
- Node.js requirement: 18+.
Note: v1.25.3 fixes client sampling validation to use the correct schema when tools are present—if your code relied on the previous validation behavior, you may see new validation failures.
Version pinning (changing `^1.25.3` to `1.25.3`) is optional since 1.25.3 is already the latest stable release.

apps/mobile/screens/(authenticated)/index/HomeScreen.tsx (3)
19-28: Consider inferring the type from tRPC for type safety.

The `OnlineDevice` interface duplicates the server-side return type. Using tRPC's inference keeps client and server types in sync and reduces maintenance burden.

♻️ Suggested approach

```typescript
import type { inferRouterOutputs } from "@trpc/server";
import type { AppRouter } from "@superset/trpc"; // adjust import path

type RouterOutputs = inferRouterOutputs<AppRouter>;
type OnlineDevice = RouterOutputs["device"]["listOnlineDevices"][number];
```
49-53: Extract the polling interval to a named constant.

Per coding guidelines, magic numbers should be extracted to named constants at module top for clarity and maintainability.

♻️ Suggested refactor

```diff
+const DEVICE_POLL_INTERVAL_MS = 10_000;
+
 export function HomeScreen() {
   // ...
   useEffect(() => {
     fetchDevices();
-    const interval = setInterval(fetchDevices, 10_000);
+    const interval = setInterval(fetchDevices, DEVICE_POLL_INTERVAL_MS);
     return () => clearInterval(interval);
   }, [fetchDevices]);
```
55-61: Avoid redundant Date object creation.

The function creates a `Date` object from the input twice. Parse once and reuse for both calculations.

♻️ Suggested optimization

```diff
 const formatLastSeen = (date: Date) => {
-  const seconds = Math.floor((Date.now() - new Date(date).getTime()) / 1000);
+  const parsedDate = new Date(date);
+  const seconds = Math.floor((Date.now() - parsedDate.getTime()) / 1000);
   if (seconds < 60) return `${seconds}s ago`;
   const minutes = Math.floor(seconds / 60);
   if (minutes < 60) return `${minutes}m ago`;
-  return new Date(date).toLocaleTimeString();
+  return parsedDate.toLocaleTimeString();
 };
```

apps/desktop/src/renderer/routes/_authenticated/settings/devices/components/DevicesSettings/DevicesSettings.tsx (2)
44-49: Extract refresh interval to a named constant and avoid loading flicker on refresh.

The magic number `10_000` should be extracted per coding guidelines. Also, setting `loading=true` on every interval refresh causes UI flickering when the list is already populated.

♻️ Suggested fix

```diff
+const DEVICE_REFRESH_INTERVAL_MS = 10_000;
+
 export function DevicesSettings() {
   const [devices, setDevices] = useState<OnlineDevice[]>([]);
   const [loading, setLoading] = useState(true);
   const [error, setError] = useState<string | null>(null);

-  const fetchDevices = useCallback(async () => {
+  const fetchDevices = useCallback(async (isInitialLoad = false) => {
     try {
-      setLoading(true);
+      if (isInitialLoad) setLoading(true);
       setError(null);
       const result = await apiTrpcClient.device.listOnlineDevices.query();
       setDevices(result);
     } catch (err) {
       setError(err instanceof Error ? err.message : "Failed to fetch devices");
     } finally {
-      setLoading(false);
+      if (isInitialLoad) setLoading(false);
     }
   }, []);

   useEffect(() => {
-    fetchDevices();
-    // Refresh every 10 seconds
-    const interval = setInterval(fetchDevices, 10_000);
+    fetchDevices(true);
+    const interval = setInterval(() => fetchDevices(false), DEVICE_REFRESH_INTERVAL_MS);
     return () => clearInterval(interval);
   }, [fetchDevices]);
```
111-117: Use Button component for consistency.

Consider using `Button` from `@superset/ui` with `variant="link"` or `variant="ghost"` for consistency with other settings pages.

♻️ Suggested fix

```diff
+import { Button } from "@superset/ui/button";
+
-  <button
-    type="button"
+  <Button
+    variant="link"
     onClick={fetchDevices}
-    className="mt-4 text-sm text-muted-foreground hover:text-foreground underline"
+    className="mt-4 text-sm"
   >
     Refresh now
-  </button>
+  </Button>
```

packages/trpc/src/router/agent/api-keys.ts (1)
99-103: Consider throwing BAD_REQUEST for missing organization.

The `list` procedure returns `[]` when no organization is selected, while other procedures (`generate`, `revoke`, `update`) throw errors. This inconsistency could mask configuration issues.

💡 Optional: consistent error handling

```diff
 list: protectedProcedure.query(async ({ ctx }) => {
   const organizationId = ctx.session.session.activeOrganizationId;
   if (!organizationId) {
-    return [];
+    throw new TRPCError({
+      code: "BAD_REQUEST",
+      message: "No active organization selected",
+    });
   }
```

apps/api/src/lib/mcp/tools.ts (3)
299-307: Consider escaping LIKE pattern characters in search input.

Special characters `%` and `_` in `params.search` are not escaped, allowing users to manipulate pattern matching (e.g., searching `%` matches everything). While not a security issue with Drizzle's parameterized queries, it may produce unexpected results.

Suggested helper

```diff
+function escapeLikePattern(input: string): string {
+  return input.replace(/[%_\\]/g, "\\$&");
+}
+
 if (params.search) {
+  const escaped = escapeLikePattern(params.search);
   const searchCondition = or(
-    ilike(tasks.title, `%${params.search}%`),
-    ilike(tasks.description, `%${params.search}%`),
+    ilike(tasks.title, `%${escaped}%`),
+    ilike(tasks.description, `%${escaped}%`),
   );
```
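A standalone version of the escaping helper can be sanity-checked directly (this is a sketch of the suggested behavior, not the shipped code):

```typescript
// Escapes LIKE/ILIKE wildcards so user input matches literally.
// The backslash must also be escaped since it is the escape character.
function escapeLikePattern(input: string): string {
  return input.replace(/[%_\\]/g, "\\$&");
}

// A literal search for "100%_done" becomes "100\%\_done" before
// being wrapped in %...% for the ILIKE pattern.
const escaped = escapeLikePattern("100%_done");
```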
1036-1068: Consider exponential backoff for the polling loop.

The fixed 500ms interval polls up to 60 times over 30 seconds. An exponential backoff (e.g., starting at 100ms, capping at 2s) would reduce database load for longer-running commands while remaining responsive for quick ones.
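A pure-function sketch of such a schedule (constants and names are illustrative, not from the PR):

```typescript
const INITIAL_DELAY_MS = 100;
const MAX_DELAY_MS = 2_000;

// Delay before the given zero-based polling attempt: doubles each
// attempt and caps at MAX_DELAY_MS.
function backoffDelayMs(attempt: number): number {
  return Math.min(INITIAL_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
}

// Produces 100, 200, 400, 800, 1600, 2000, 2000 for attempts 0..6.
const schedule = Array.from({ length: 7 }, (_, i) => backoffDelayMs(i));
```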
672-702: Consider extracting a helper for device-routed tools.

Multiple device tools share identical deviceId resolution and error handling. A helper would reduce repetition.

Example helper pattern

```typescript
function createDeviceTool<T extends z.ZodType>(
  server: McpServer,
  name: string,
  description: string,
  schema: T,
  toolName: string,
  getParams: (params: z.infer<T>) => Record<string, unknown>,
  ctx: McpContext,
) {
  server.tool(name, description, schema, async (params) => {
    const targetDeviceId = params.deviceId ?? ctx.defaultDeviceId;
    if (!targetDeviceId) {
      return {
        content: [
          {
            type: "text",
            text: "Error: No device specified and no default device configured",
          },
        ],
        isError: true,
      };
    }
    return executeOnDevice({
      ctx,
      deviceId: targetDeviceId,
      tool: toolName,
      params: getParams(params),
    });
  });
}
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/list-projects.ts (1)
4-4: Enforce strict no-arg validation.

`z.object({})` will silently strip unknown params; for agent tools it's safer to reject extras. Consider `z.strictObject({})` (Zod 4) or `.strict()`.

💡 Suggested change

```diff
-const schema = z.object({});
+const schema = z.strictObject({});
```
22-27: Log deletion failures with context.

The catch block returns an error but doesn't log it. Add a prefixed log to aid debugging while avoiding PII.

💡 Suggested change

```diff
 } catch (error) {
+  console.error("[tools/delete_workspace] Failed to delete workspace", error);
   return {
     success: false,
     error: error instanceof Error ? error.message : "Failed to delete workspace",
   };
 }
```

As per coding guidelines, never swallow errors silently and use prefixed logs.
apps/desktop/src/renderer/routes/_authenticated/hooks/tools/list-workspaces.ts (2)
4-4: Enforce strict no-arg validation.

`z.object({})` strips unknown params; prefer `z.strictObject({})` (or `.strict()`) to reject extras.

💡 Suggested change

```diff
-const schema = z.object({});
+const schema = z.strictObject({});
```
16-19: Avoid double casting the workspace list.

`as unknown as Record<string, unknown>[]` hides shape issues and can mask accidental schema changes. Prefer mapping to an explicit DTO or making `CommandResult` generic so the typed array can pass through.

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/index.ts (2)
11-12: Replace any in the tool registry.

Use `ZodTypeAny` (or similar) to keep type safety without `any`.

💡 Suggested change

```diff
+import type { ZodTypeAny } from "zod";

 // biome-ignore lint/suspicious/noExplicitAny: Tool schemas vary
-const tools: ToolDefinition<any>[] = [
+const tools: ToolDefinition<ZodTypeAny>[] = [
```

As per coding guidelines, avoid `any` types.
29-33: Prefer object params for executeTool.

Guidelines call for object parameters when there are 2+ inputs. This also makes call sites clearer as tools grow.
As per coding guidelines, use object parameters for functions with 2+ parameters.
apps/desktop/src/main/lib/device-info.ts (1)
8-42: Log machine-ID retrieval failures (without PII).

The catch block is silent. Log with a prefixed message while avoiding the raw ID or hostname.

💡 Suggested change

```diff
-} catch {
-  // Fallback if platform-specific method fails
-}
+} catch (error) {
+  console.warn("[device-info/getRawMachineId] Platform lookup failed", error);
+  // Fallback if platform-specific method fails
+}
```
apps/desktop/src/renderer/routes/_authenticated/hooks/useCommandWatcher.ts (3)
76-81: Use object params for processCommand.

This function has 3 positional params; consider a single object param for clarity and extensibility.
As per coding guidelines, use object parameters for functions with 2+ parameters.
94-103: Extract the 100ms delays into a constant.

The repeated `100` should be a named constant to document intent and avoid magic numbers.

💡 Suggested change

```diff
+const COMMAND_STATUS_SETTLE_MS = 100;
 ...
-      await new Promise((resolve) => setTimeout(resolve, 100));
+      await new Promise((resolve) => setTimeout(resolve, COMMAND_STATUS_SETTLE_MS));
 ...
-      await new Promise((resolve) => setTimeout(resolve, 100));
+      await new Promise((resolve) => setTimeout(resolve, COMMAND_STATUS_SETTLE_MS));
```

As per coding guidelines, extract magic numbers to named constants.
165-167: If command order matters, await execution.

`processCommand` is fired without awaiting, so commands run concurrently. If ordering or resource contention matters, consider sequencing (e.g., a queue or `for...of` with await inside an async IIFE).

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/types.ts (2)
5-9: Consider a discriminated union for CommandResult.

Right now `success`, `data`, and `error` can co-exist in any combination, which weakens type safety. A discriminated union makes success/error states unambiguous.

♻️ Proposed refactor

```diff
-export interface CommandResult {
-  success: boolean;
-  data?: Record<string, unknown>;
-  error?: string;
-}
+export type CommandResult =
+  | { success: true; data?: Record<string, unknown> }
+  | { success: false; error: string };
```
26-30: Refactor execute to accept a single object parameter to align with project guidelines.

The `execute` function has 2 parameters and should follow the pattern `execute({ params, ctx })`. This requires updating all 7 tool implementations (create-worktree, delete-workspace, get-app-context, list-projects, list-workspaces, navigate-to-workspace, switch-workspace) and the single call site in index.ts line 49.

♻️ Proposed interface refactor

```diff
 export interface ToolDefinition<T extends z.ZodType> {
   name: string;
   schema: T;
-  execute: (params: z.infer<T>, ctx: ToolContext) => Promise<CommandResult>;
+  execute: (args: { params: z.infer<T>; ctx: ToolContext }) => Promise<CommandResult>;
 }
```

Update the call site (index.ts line 49) from `tool.execute(parsed.data, ctx)` to `tool.execute({ params: parsed.data, ctx })`, and refactor each tool's execute function signature accordingly.
#### `get_current_workspace`
Get the currently active workspace on a device.

```typescript
const getCurrentWorkspaceInput = z.object({
  deviceId: z.string().optional(),
});

const getCurrentWorkspaceOutput = z.object({
  workspace: z.object({
    id: z.string().uuid(),
    name: z.string(),
    path: z.string(),
    branch: z.string(),
    repositoryId: z.string().uuid().nullable(),
    // Additional context
    uncommittedChanges: z.number().int(),
    currentTask: taskSchema.nullable(),
  }).nullable(),
});
```
Documentation doesn't match implementation.
The docs reference get_current_workspace but the implementation uses get_app_context. Similarly, create_worktree in docs vs create_workspace in code. The list_tasks output schema shows total but the implementation returns count.
🤖 Prompt for AI Agents
In `@apps/api/MCP_TOOLS.md` around lines 204 - 224, The documentation is out of
sync with the code: update the docs to match the actual function and schema
names used in code (or vice versa). Specifically, replace references to
get_current_workspace with get_app_context, replace create_worktree with
create_workspace, and change the list_tasks output schema to use count instead
of total (or update the implementation to return total if you prefer). Ensure
the zod schemas in the docs (e.g., getCurrentWorkspaceInput/output and the
list_tasks output) reflect the exact field names and nullable types used by the
functions get_app_context, create_workspace, and list_tasks so names and fields
match between docs and implementation.
## Implementation Notes

1. **Validation**: All inputs validated with Zod, converted to JSON Schema for MCP
2. **Device routing**: Device tools check `canRunTool(deviceType, toolName)` before routing
Implementation note references non-existent function.
"Device tools check canRunTool(deviceType, toolName) before routing" — this function doesn't exist in the implementation. Either add it or remove this note.
🤖 Prompt for AI Agents
In `@apps/api/MCP_TOOLS.md` at line 307, The docs reference a non-existent helper
canRunTool(deviceType, toolName); either remove that line or implement the
helper and wire it into routing: add a function named canRunTool(deviceType,
toolName) that consults the existing tool registry (e.g., DeviceToolRegistry or
the module that maps device types to supported tools) and returns a boolean,
then call it from the device routing logic (e.g., routeDeviceTool or
DeviceRouter.handleRequest) to gate routing decisions; update the docs if you
remove the note or ensure the new function name and signature match the
documentation.
| "target_device_id" text, | ||
| "target_device_type" text, |
Consider using device_type enum for target_device_type column.

The device_type enum is defined at line 2, but target_device_type on agent_commands is defined as plain text. This allows invalid device types to be stored and loses the benefit of enum validation.

💡 Suggested fix

```diff
-    "target_device_type" text,
+    "target_device_type" "device_type",
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```sql
    "target_device_id" text,
    "target_device_type" "device_type",
```
🤖 Prompt for AI Agents
In `@packages/db/drizzle/0013_add_mcp_actions_and_presence.sql` around lines 7 -
8, Change the "target_device_type" column on the agent_commands table from plain
text to the existing device_type enum to enforce valid values: replace the
column type definition "target_device_type" text with "target_device_type"
device_type in the migration where device_type is defined, and update any
accompanying INSERTs or test fixtures in this migration that set
target_device_type to use valid device_type enum values; if this migration has
already been applied in environments, provide an ALTER TABLE ... USING cast
migration to convert existing values to device_type safely.
```typescript
// Update last used timestamp (fire and forget)
db.update(apiKeys)
  .set({ lastUsedAt: new Date() })
  .where(eq(apiKeys.id, found.id))
  .catch(() => {}); // Ignore errors
```
Log errors instead of silently swallowing them.
Per coding guidelines, errors should never be swallowed silently. Even for fire-and-forget operations, logging failures helps with debugging and monitoring.
💡 Suggested fix
// Update last used timestamp (fire and forget)
db.update(apiKeys)
.set({ lastUsedAt: new Date() })
.where(eq(apiKeys.id, found.id))
- .catch(() => {}); // Ignore errors
+ .catch((error) => {
+ console.error("[api-keys/validateApiKey] Failed to update lastUsedAt:", error);
+ });🤖 Prompt for AI Agents
In `@packages/trpc/src/router/agent/api-keys.ts` around lines 265 - 269, The
fire-and-forget DB update for apiKeys currently swallows errors (.catch(() =>
{})); change the catch to log the error instead so failures are observable —
e.g. replace .catch(() => {}) on the db.update(apiKeys).set({ lastUsedAt: new
Date() }).where(eq(apiKeys.id, found.id)) promise with .catch(err => <use
available logger>.error('Failed to update apiKeys.lastUsedAt for id=' +
String(found.id), err)) or, if no logger is in scope, .catch(err =>
console.error(...)); ensure you reference apiKeys, db.update, eq(apiKeys.id,
found.id) and found.id in the log message.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@apps/api/src/lib/mcp/tools.ts`:
- Around line 1033-1074: When marking the command as "timeout" after the poll
loop, avoid clobbering a command that just completed; change the final
db.update(agentCommands).set({ status: "timeout" }).where(eq(agentCommands.id,
cmd.id)) to include an additional guard so the update only applies to in‑flight
statuses (e.g., add conditions that status is not "completed", not "failed", and
not already "timeout" using the same query builder operators like eq/neq or a
notIn check against agentCommands.status and cmd.id) so that completed/failed
results are preserved.
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/switch-workspace.ts`:
- Around line 27-31: The find predicate can throw when ws.branch is
null/undefined because it calls ws.branch.toLowerCase(); update the predicate
used in workspaces.find to defensively handle nullable branch and name by
comparing lowercased values using safe checks (e.g., use
ws.branch?.toLowerCase() or (ws.branch ?? '').toLowerCase() and same for
ws.name) against searchName so the comparison never calls .toLowerCase() on
null/undefined; keep the symbol references workspaces.find, ws.branch, ws.name,
and searchName when making the change.
In `@apps/desktop/VOICE_AGENT_PLAN.md`:
- Around line 210-224: transcribeAudio currently assumes fetch succeeds and
returns JSON; update it to handle non-200 statuses and timeouts: create an
AbortController with a configurable timeout, pass its signal to
fetch(WISPRFLOW_API), check response.ok and map common status codes to
meaningful error messages, parse JSON only on ok responses, and throw or return
a structured error when status !== 200; also include getStoredApiKey and
WISPRFLOW_API in the error context so callers can log which key/endpoint failed.
- Around line 301-312: The fenced code block containing the directory tree (the
block that starts with ``` and lists apps/desktop/src/ and its subfolders) is
missing a language specifier which triggers MD040; update that opening fence to
include a language (for example ```text or ```bash) so the code block has an
explicit language and the Markdown linter stops flagging it.
- Around line 7-45: The fenced ASCII diagram block in VOICE_AGENT_PLAN.md (the
code block starting with the triple backticks that contains the "RENDERER
PROCESS" / "MAIN PROCESS" / "CLAUDE CODE CLI" diagram) must include a language
identifier to satisfy markdownlint MD040; update the opening fence from ``` to
something like ```text (or ```plain) so the diagram is treated as a specific
language and the linter warning is resolved.
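The status guard described in the tools.ts prompt above boils down to a pure check; a TypeScript sketch (status names assumed from the PR's command lifecycle, helper name hypothetical):

```typescript
type CommandStatus =
  | "pending"
  | "claimed"
  | "executing"
  | "completed"
  | "failed"
  | "timeout";

// Terminal statuses must never be overwritten by the poller's timeout
// sweep; only in-flight commands may transition to "timeout".
const TERMINAL_STATUSES: ReadonlySet<CommandStatus> = new Set([
  "completed",
  "failed",
  "timeout",
]);

function shouldMarkTimeout(status: CommandStatus): boolean {
  return !TERMINAL_STATUSES.has(status);
}
```

The same condition would be expressed in the SQL `WHERE` clause of the final update so the check and the write happen atomically.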
♻️ Duplicate comments (21)
apps/desktop/VOICE_AGENT_PLAN.md (2)
73-95: AudioRecorder needs cleanup + error handling.

The plan omits MediaStream cleanup, cancellation, and error handling in start()/stop(). This will leak the mic stream and can leave recording stuck on failures.
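One framework-free way to make such cleanup reliable is a disposer list that the recorder flushes exactly once on teardown; a minimal sketch (names hypothetical, the `track.stop()` registration is only illustrated by a counter here):

```typescript
// Collects cleanup callbacks and runs each exactly once, even if
// dispose() is called multiple times or a callback throws.
function createDisposer() {
  let callbacks: Array<() => void> = [];
  return {
    add(cb: () => void) {
      callbacks.push(cb);
    },
    dispose() {
      const toRun = callbacks;
      callbacks = [];
      for (const cb of toRun) {
        try {
          cb();
        } catch {
          // A failing cleanup must not block the remaining ones.
        }
      }
    },
  };
}

// Usage sketch: register track.stop() right after getUserMedia resolves,
// then call dispose() from stop(), error paths, and unmount alike.
const disposer = createDisposer();
let stopped = 0;
disposer.add(() => {
  stopped += 1; // stands in for track.stop()
});
disposer.dispose();
disposer.dispose(); // second call is a no-op
```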
265-289: Subscription/unmount cleanup is required.

The hook plan doesn't unsubscribe from the tRPC stream or dispose the recorder on unmount, which will leak resources and duplicate streams over time.
apps/desktop/src/renderer/routes/_authenticated/hooks/useDevicePresence.ts (1)
16-24: Log heartbeat failures instead of swallowing.

Same concern as earlier review: silent catches hide connectivity/auth issues. Please log with a contextual prefix (and device info if available). As per coding guidelines, log errors with context.

💡 Suggested change

```diff
-    } catch {
-      // Heartbeat can fail when offline - ignore
-    }
+    } catch (error) {
+      console.warn(
+        "[devicePresence/heartbeat] failed",
+        {
+          deviceId: deviceInfo.deviceId,
+          deviceName: deviceInfo.deviceName,
+        },
+        error,
+      );
+    }
```

apps/desktop/src/renderer/routes/_authenticated/providers/CollectionsProvider/collections.ts (1)
216-231: Inconsistent txid handling remains unaddressed.

As flagged in the previous review, the `onUpdate` handler has problematic txid handling:
- Line 219: returns `Date.now()` as a fabricated txid when status hasn't changed
- Line 230: converts the server's BigInt txid to Number, risking precision loss

This breaks Electric SQL sync semantics and should be addressed server-side by wrapping the update in a transaction and returning a proper txid via `getCurrentTxid(tx)`.

apps/desktop/src/renderer/routes/_authenticated/settings/api-keys/components/ApiKeysSettings/ApiKeysSettings.tsx (3)
110-117: Add confirmation before revoking API keys.

Revoking is irreversible—the key is cryptographically hashed and cannot be recovered. As flagged previously, add a confirmation dialog to prevent accidental deletion.

Proposed fix with confirmation

```diff
 const handleRevokeKey = async (id: string) => {
+  const confirmed = window.confirm(
+    "Are you sure you want to revoke this API key? This action cannot be undone.",
+  );
+  if (!confirmed) return;
+
   try {
     await apiTrpcClient.apiKeys.revoke.mutate({ id });
     loadApiKeys();
   } catch (error) {
     console.error("[api-keys] Failed to revoke API key:", error);
   }
 };
```
119-123: Add error handling for clipboard API and extract magic number.

As flagged previously, `navigator.clipboard.writeText()` can throw if permissions are denied. The timeout duration should also be extracted to a named constant.

Proposed fix

```diff
+const COPY_FEEDBACK_DURATION_MS = 2000;
+
 // Inside component:
-const handleCopyKey = () => {
-  navigator.clipboard.writeText(newKeyValue);
-  setCopied(true);
-  setTimeout(() => setCopied(false), 2000);
+const handleCopyKey = async () => {
+  try {
+    await navigator.clipboard.writeText(newKeyValue);
+    setCopied(true);
+    setTimeout(() => setCopied(false), COPY_FEEDBACK_DURATION_MS);
+  } catch (error) {
+    console.error("[api-keys] Failed to copy key to clipboard:", error);
+  }
 };
```
259-261: Prevent double submission on Enter key.

As flagged previously, the Enter key handler should check `isGenerating` to prevent rapid double submissions.

Proposed fix

```diff
 onKeyDown={(e) => {
-  if (e.key === "Enter") handleGenerateKey();
+  if (e.key === "Enter" && !isGenerating) handleGenerateKey();
 }}
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/get-app-context.ts (1)
11-16: Strip query string before extracting workspace ID.

If the hash contains a query string (e.g., `#/workspace/abc123?tab=settings`), the regex will capture it as part of the ID, resulting in `abc123?tab=settings` instead of `abc123`.

Suggested fix

```diff
 const hash = window.location.hash;
 const pathname = hash.startsWith("#") ? hash.slice(1) : hash;
+const pathOnly = pathname.split("?")[0];

 // Parse workspace ID from route if present (route is /workspace/$workspaceId)
-const workspaceMatch = pathname.match(/\/workspace\/([^/]+)/);
+const workspaceMatch = pathOnly.match(/\/workspace\/([^/]+)/);
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/index.ts (1)
40-47: Replace parsed.error.errors with parsed.error.issues for Zod 4 compatibility.

The repository uses Zod 4.3.5, which removed the `.errors` property. This will crash at runtime with `Cannot read property 'map' of undefined` when parameter validation fails.

Fix

```diff
-  error: `Invalid params: ${parsed.error.errors.map((e: { message: string }) => e.message).join(", ")}`,
+  error: `Invalid params: ${parsed.error.issues.map((e) => e.message).join(", ")}`,
```

apps/api/src/lib/mcp/tools.ts (4)
19-23: Refactor registerMcpTools to accept a single params object.

This aligns with the project's TS style rule for functions with 2+ params. As per coding guidelines.
92-118: Slug generation can be empty and collide globally.

`tasks.slug` is globally unique; an org-scoped lookup can still collide, and titles like "!!!" can produce an empty slug. Add a non-empty fallback and ensure global uniqueness.

🐛 Suggested fix

```diff
-  const slug = params.title
+  const baseSlug = params.title
     .toLowerCase()
     .replace(/[^a-z0-9]+/g, "-")
     .replace(/^-|-$/g, "")
     .slice(0, 50);
+  const slug = baseSlug.length > 0 ? baseSlug : "task";

   // Check for existing slug and make unique
   const existingTasks = await db
     .select({ slug: tasks.slug })
     .from(tasks)
-    .where(
-      and(
-        eq(tasks.organizationId, ctx.organizationId),
-        ilike(tasks.slug, `${slug}%`),
-      ),
-    );
+    .where(ilike(tasks.slug, `${slug}%`));
```
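The fallback-and-suffix logic can be exercised in isolation; a self-contained sketch (the helper name and the in-memory `taken` set are hypothetical, standing in for the DB lookup):

```typescript
// Slugifies a title; falls back to "task" when nothing survives the
// character filter, and appends -2, -3, ... until the slug is unused.
function uniqueSlug(title: string, taken: Set<string>): string {
  const base =
    title
      .toLowerCase()
      .replace(/[^a-z0-9]+/g, "-")
      .replace(/^-|-$/g, "")
      .slice(0, 50) || "task";
  if (!taken.has(base)) return base;
  let n = 2;
  while (taken.has(`${base}-${n}`)) n += 1;
  return `${base}-${n}`;
}

// Stands in for the global slugs already present in the tasks table.
const taken = new Set(["fix-login", "fix-login-2"]);
```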
**213-230: Reject empty updates before calling `.set({})`.**

All fields are optional; a request with only `taskId` can produce an empty update object, which many Drizzle versions reject.

💡 Suggested fix

```diff
  if (params.dueDate !== undefined)
    updateData.dueDate = params.dueDate ? new Date(params.dueDate) : null;
  if (params.estimate !== undefined) updateData.estimate = params.estimate;
+
+ if (Object.keys(updateData).length === 0) {
+   return {
+     content: [{ type: "text", text: "Error: No fields provided to update" }],
+     isError: true,
+   };
+ }
```

In Drizzle ORM, what happens when `update().set({})` is called? Does it throw or generate invalid SQL?
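As a quick illustration of the failure mode, a hypothetical update-builder shows how an all-optional input yields an empty object; the names below are illustrative, not the repository's code:

```typescript
// Hypothetical sketch: building a partial update from optional params.
interface UpdateParams {
  status?: string;
  priority?: string;
}

function buildUpdate(params: UpdateParams): Record<string, unknown> {
  const updateData: Record<string, unknown> = {};
  if (params.status !== undefined) updateData.status = params.status;
  if (params.priority !== undefined) updateData.priority = params.priority;
  return updateData;
}

// A request carrying only an id produces {}, so guard before calling .set():
const updateData = buildUpdate({});
if (Object.keys(updateData).length === 0) {
  console.log("Error: No fields provided to update");
}
```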
**821-867: Forward `taskId` to the device command.**

The tool accepts `taskId` but drops it before dispatch, so the desktop can't associate the workspace with the task.

🐛 Suggested fix

```diff
  return executeOnDevice({
    ctx,
    deviceId: targetDeviceId,
    tool: "create_workspace",
    params: {
      name: params.name,
      branchName: params.branchName,
      baseBranch: params.baseBranch,
+     taskId: params.taskId,
    },
  });
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/create-worktree.ts (1)
**32-46: Guard against an unresolved `projectId` before the mutation.**

Even with workspaces present, the fallback could still yield a missing `projectId`; avoid calling the mutation with invalid input and return a clear error instead.

💡 Suggested fix

```diff
  // Fall back to the most recently used workspace's project
  if (!projectId) {
    const sorted = [...workspaces].sort(
      (a, b) => (b.lastOpenedAt ?? 0) - (a.lastOpenedAt ?? 0),
    );
    projectId = sorted[0].projectId;
  }
+
+ if (!projectId) {
+   return { success: false, error: "No project available for workspace" };
+ }

  try {
    const result = await ctx.createWorktree.mutateAsync({
      projectId,
```

apps/api/MCP_TOOLS.md (3)
**84-88: Docs mismatch: `list_tasks` output uses `total` but the implementation returns `count`.**

Align the docs or the implementation so the field name matches.
**204-246: Docs mismatch: tool names differ from the implementation.**

The docs reference `get_current_workspace` and `create_worktree`, while the implementation uses `get_app_context` and `create_workspace`. Please align the names.
**306-308: Implementation note references a non-existent helper.**

`canRunTool(deviceType, toolName)` isn't present in the code; either remove the note or add the helper.

apps/api/src/lib/mcp/auth.ts (1)
**64-74: Wrap API key validation so unhandled exceptions don't bubble up without logging.**

`validateApiKey` performs database queries that can fail. If an exception occurs (connection error, query failure, etc.), it will propagate uncaught and result in a 500 error with no context.

🐛 Proposed fix

```diff
  export async function authenticateMcpRequest(
    request: Request,
  ): Promise<McpContext | null> {
    const apiKey = request.headers.get("X-API-Key");
    if (!apiKey) {
      return null;
    }
-   return validateApiKey(apiKey);
+   try {
+     return await validateApiKey(apiKey);
+   } catch (error) {
+     console.error("[mcp/auth] API key validation failed:", error);
+     return null;
+   }
  }
```

packages/db/src/schema/schema.ts (3)
**276-286: The unique index should include `organizationId` to avoid cross-org collisions.**

The current unique index on `(userId, deviceId)` prevents the same user/device from appearing in multiple organizations, which breaks multi-org presence tracking.

🐛 Suggested fix

```diff
- uniqueIndex("device_presence_user_device_idx").on(
-   table.userId,
-   table.deviceId,
- ),
+ uniqueIndex("device_presence_user_org_device_idx").on(
+   table.userId,
+   table.organizationId,
+   table.deviceId,
+ ),
```
**303-307: Add an FK for `parentCommandId` and use the `deviceType` enum for `targetDeviceType`.**

Parent/child commands should be referentially safe, and device type should be constrained to the same enum used elsewhere to avoid drift.

🛠️ Proposed adjustments

```diff
- targetDeviceType: text("target_device_type"),
+ targetDeviceType: deviceType("target_device_type"),
  tool: text().notNull(),
  params: jsonb().$type<Record<string, unknown>>(),
- parentCommandId: uuid("parent_command_id"),
+ parentCommandId: uuid("parent_command_id").references(
+   () => agentCommands.id,
+   { onDelete: "set null" },
+ ),
```
**349-349: Store `usageCount` as an integer, not text.**

Text counts break numeric ordering and arithmetic (e.g., `"10" < "2"`). Use an integer column with a numeric default.

💡 Suggested fix

```diff
- usageCount: text("usage_count").notNull().default("0"),
+ usageCount: integer("usage_count").notNull().default(0),
```
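A one-liner makes the ordering problem easy to verify; this is plain TypeScript, independent of the schema:

```typescript
// String comparison is lexicographic, so text counters sort incorrectly.
const textCounts = ["10", "2", "1"].sort();
console.log(textCounts); // ["1", "10", "2"]: "10" sorts before "2"

const numericCounts = [10, 2, 1].sort((a, b) => a - b);
console.log(numericCounts); // [1, 2, 10]
```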
🧹 Nitpick comments (11)
apps/desktop/src/renderer/routes/_authenticated/settings/devices/components/DevicesSettings/DevicesSettings.tsx (2)
**31-42: The loading state causes UI flicker on auto-refresh.**

Setting `loading=true` on every fetch, including the 10-second interval refreshes, will briefly flash the "Loading..." indicator if the device list is empty, or cause visual instability. Consider only showing the loading state on the initial fetch.

♻️ Suggested fix to prevent flicker

```diff
  const fetchDevices = useCallback(async () => {
    try {
-     setLoading(true);
      setError(null);
      const result = await apiTrpcClient.device.listOnlineDevices.query();
      setDevices(result);
    } catch (err) {
      setError(err instanceof Error ? err.message : "Failed to fetch devices");
    } finally {
      setLoading(false);
    }
  }, []);
+
+ useEffect(() => {
+   fetchDevices();
+ }, [fetchDevices]);
```

Then update the interval effect to not call `fetchDevices` initially:

```diff
  useEffect(() => {
-   fetchDevices();
-
    // Refresh every 10 seconds
    const interval = setInterval(fetchDevices, 10_000);
    return () => clearInterval(interval);
  }, [fetchDevices]);
```
**44-49: Extract the magic number to a named constant.**

Per coding guidelines, extract hardcoded values to named constants at the module top for clarity and maintainability.

♻️ Suggested fix

```diff
+const DEVICE_REFRESH_INTERVAL_MS = 10_000;
+
 const DEVICE_ICONS = {
   desktop: HiOutlineComputerDesktop,
   ...
 };
```

Then use it:

```diff
- const interval = setInterval(fetchDevices, 10_000);
+ const interval = setInterval(fetchDevices, DEVICE_REFRESH_INTERVAL_MS);
```

apps/mobile/screens/(authenticated)/index/HomeScreen.tsx (2)
**37-45: Use a [domain/operation] prefix in the warning log.**

The current prefix is missing the operation portion; consider a more specific prefix. As per coding guidelines, please update the log prefix to follow the required pattern.

💡 Suggested change

```diff
- console.warn("[devices] Failed to fetch:", err);
+ console.warn("[devices/fetch] Failed to fetch:", err);
```
**49-52: Extract the refresh interval into a named constant.**

This avoids magic numbers and makes future tuning easier. As per coding guidelines, please use a module-level constant.

💡 Suggested change

```diff
+const DEVICE_REFRESH_MS = 10_000;
+
 export function HomeScreen() {
@@
-   const interval = setInterval(fetchDevices, 10_000);
+   const interval = setInterval(fetchDevices, DEVICE_REFRESH_MS);
```

apps/desktop/src/renderer/react-query/workspaces/useCreateWorkspace.ts (1)
**38-39: Strip `skipNavigation` before passing mutation options.**

`skipNavigation` isn't a react-query option; avoid leaking custom props into `useMutation` options by destructuring first.

♻️ Proposed refactor

```diff
 export function useCreateWorkspace(options?: UseCreateWorkspaceOptions) {
+  const { skipNavigation, onSuccess, ...mutationOptions } = options ?? {};
   const navigate = useNavigate();
   const utils = electronTrpc.useUtils();
@@
   return electronTrpc.workspaces.create.useMutation({
-    ...options,
+    ...mutationOptions,
     onSuccess: async (data, ...rest) => {
@@
-      if (!options?.skipNavigation) {
+      if (!skipNavigation) {
         navigateToWorkspace(data.workspace.id, navigate);
       }
@@
-      await options?.onSuccess?.(data, ...rest);
+      await onSuccess?.(data, ...rest);
     },
   });
 }
```

Also applies to: 69-74
apps/desktop/src/main/lib/device-info.ts (2)
**20-25: The empty catch block silently swallows errors.**

The nested try-catch for Linux machine-id sources silently discards the error from the first file read, which makes debugging harder if both files are missing or unreadable. Consider logging for observability.

Proposed fix with minimal logging

```diff
  } else if (os === "linux") {
    try {
      return readFileSync("/etc/machine-id", "utf8").trim();
    } catch {
-     return readFileSync("/var/lib/dbus/machine-id", "utf8").trim();
+     try {
+       return readFileSync("/var/lib/dbus/machine-id", "utf8").trim();
+     } catch (e) {
+       console.warn("[device-info] Could not read Linux machine-id files:", e);
+       throw e; // Let outer catch handle fallback
+     }
    }
  }
```
**40-42: Add logging context for debugging platform-specific failures.**

Per coding guidelines, errors should not be silently swallowed. While the fallback is intentional, logging helps diagnose issues when the platform-specific method unexpectedly fails.

Proposed fix

```diff
- } catch {
-   // Fallback if platform-specific method fails
+ } catch (error) {
+   console.warn("[device-info] Platform-specific machine ID retrieval failed, using fallback:", error);
  }
```

apps/desktop/src/renderer/routes/_authenticated/hooks/useCommandWatcher.ts (3)
**16-16: A module-level Set may cause stale state during development.**

The `processingCommands` Set is declared at module level, which means it persists across Hot Module Replacement (HMR) during development. If the module reloads but the Set isn't cleared, previously tracked command IDs could block reprocessing. Consider using a ref inside the hook or clearing on mount.

Alternative using useRef

```diff
+import { useCallback, useEffect, useMemo, useRef } from "react";

-const processingCommands = new Set<string>();

 export function useCommandWatcher() {
+  const processingCommands = useRef(new Set<string>());
   // ... rest of hook
   // Replace processingCommands.has/add/delete with
   // processingCommands.current.has/add/delete
 }
```
**94-102: Extract the magic-number delay to a named constant.**

Per coding guidelines, hardcoded values should be extracted to named constants at the module top for clarity and maintainability.

Proposed fix

```diff
+const COMMAND_STATUS_TRANSITION_DELAY_MS = 100;
+
 // Inside processCommand:
-await new Promise((resolve) => setTimeout(resolve, 100));
+await new Promise((resolve) => setTimeout(resolve, COMMAND_STATUS_TRANSITION_DELAY_MS));
```
**154-159: A side effect inside a filter callback is a code smell.**

Updating collection state inside `filter()` makes the code harder to reason about and could cause unexpected behavior if the filter runs multiple times. Consider separating timeout handling from filtering.

Proposed fix separating concerns

```diff
  const now = new Date();
+ // Handle timed-out commands separately
+ const timedOutCommands = pendingCommands.filter((cmd) => {
+   return cmd.targetDeviceId === deviceInfo.deviceId &&
+     cmd.timeoutAt && new Date(cmd.timeoutAt) < now;
+ });
+ for (const cmd of timedOutCommands) {
+   collections.agentCommands.update(cmd.id, (draft) => {
+     draft.status = "timeout";
+     draft.error = "Command expired before execution";
+   });
+ }
+
  const commandsForThisDevice = pendingCommands.filter((cmd) => {
    if (cmd.targetDeviceId !== deviceInfo.deviceId) return false;
    if (processingCommands.has(cmd.id)) return false;
    // Security: verify org matches (don't trust Electric filtering alone)
    if (cmd.organizationId !== organizationId) {
      console.warn(`[command-watcher] Org mismatch for ${cmd.id}`);
      return false;
    }
-   if (cmd.timeoutAt && new Date(cmd.timeoutAt) < now) {
-     collections.agentCommands.update(cmd.id, (draft) => {
-       draft.status = "timeout";
-       draft.error = "Command expired before execution";
-     });
-     return false;
-   }
+   if (cmd.timeoutAt && new Date(cmd.timeoutAt) < now) return false;
    return true;
  });
```

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/types.ts (1)
**27-31: Prefer a single params object for `execute`.**

This keeps tool handlers consistent with the object-params guideline and makes call sites more self-documenting. As per coding guidelines.

♻️ Suggested refactor

```diff
 export interface ToolDefinition<T extends z.ZodType> {
   name: string;
   schema: T;
-  execute: (params: z.infer<T>, ctx: ToolContext) => Promise<CommandResult>;
+  execute: (args: {
+    params: z.infer<T>;
+    ctx: ToolContext;
+  }) => Promise<CommandResult>;
 }
```
```typescript
// Poll for result
const startTime = Date.now();

while (Date.now() - startTime < timeout) {
  const [updated] = await db
    .select()
    .from(agentCommands)
    .where(eq(agentCommands.id, cmd.id))
    .limit(1);

  if (updated?.status === "completed") {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(updated.result ?? { success: true }, null, 2),
        },
      ],
    };
  }

  if (updated?.status === "failed") {
    return {
      content: [
        {
          type: "text",
          text: `Error: ${updated.error ?? "Command failed"}`,
        },
      ],
      isError: true,
    };
  }

  // Wait before next poll
  await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
}

// Mark as timeout
await db
  .update(agentCommands)
  .set({ status: "timeout" })
  .where(eq(agentCommands.id, cmd.id));
```
**Avoid overwriting completed/failed commands when timing out.**
A device can complete between the last poll and the timeout update. Guard the timeout update to only apply to “in‑flight” statuses to avoid clobbering real results.
💡 Suggested fix

```diff
- await db
-   .update(agentCommands)
-   .set({ status: "timeout" })
-   .where(eq(agentCommands.id, cmd.id));
+ await db
+   .update(agentCommands)
+   .set({ status: "timeout" })
+   .where(
+     and(
+       eq(agentCommands.id, cmd.id),
+       or(
+         eq(agentCommands.status, "pending"),
+         eq(agentCommands.status, "claimed"),
+       ),
+     ),
+   );
```

📝 Committable suggestion
‼️ IMPORTANT

Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```typescript
// Poll for result
const startTime = Date.now();
while (Date.now() - startTime < timeout) {
  const [updated] = await db
    .select()
    .from(agentCommands)
    .where(eq(agentCommands.id, cmd.id))
    .limit(1);

  if (updated?.status === "completed") {
    return {
      content: [
        {
          type: "text",
          text: JSON.stringify(updated.result ?? { success: true }, null, 2),
        },
      ],
    };
  }

  if (updated?.status === "failed") {
    return {
      content: [
        {
          type: "text",
          text: `Error: ${updated.error ?? "Command failed"}`,
        },
      ],
      isError: true,
    };
  }

  // Wait before next poll
  await new Promise((resolve) => setTimeout(resolve, POLL_INTERVAL_MS));
}

// Mark as timeout
await db
  .update(agentCommands)
  .set({ status: "timeout" })
  .where(
    and(
      eq(agentCommands.id, cmd.id),
      or(
        eq(agentCommands.status, "pending"),
        eq(agentCommands.status, "claimed"),
      ),
    ),
  );
```
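The guard above is a compare-and-set on the status column; the same idea can be sketched in memory. The statuses mirror the ones used in this PR, but the `markTimeout` helper is hypothetical:

```typescript
type CommandStatus =
  | "pending"
  | "claimed"
  | "executing"
  | "completed"
  | "failed"
  | "timeout";

// Only transition in-flight commands; terminal results are preserved.
function markTimeout(current: CommandStatus): CommandStatus {
  const inFlight: CommandStatus[] = ["pending", "claimed"];
  return inFlight.includes(current) ? "timeout" : current;
}

console.log(markTimeout("pending")); // "timeout"
console.log(markTimeout("completed")); // "completed" (not clobbered)
```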
🤖 Prompt for AI Agents
In `@apps/api/src/lib/mcp/tools.ts` around lines 1033 - 1074, When marking the
command as "timeout" after the poll loop, avoid clobbering a command that just
completed; change the final db.update(agentCommands).set({ status: "timeout"
}).where(eq(agentCommands.id, cmd.id)) to include an additional guard so the
update only applies to in‑flight statuses (e.g., add conditions that status is
not "completed", not "failed", and not already "timeout" using the same query
builder operators like eq/neq or a notIn check against agentCommands.status and
cmd.id) so that completed/failed results are preserved.
```typescript
const found = workspaces.find(
  (ws) =>
    ws.name.toLowerCase() === searchName ||
    ws.branch.toLowerCase() === searchName,
);
```
**Potential runtime error if `branch` is undefined or null.**

The `ws.branch` field may be nullable. Calling `.toLowerCase()` on `undefined` or `null` will throw a `TypeError`.
🐛 Proposed fix

```diff
  const found = workspaces.find(
    (ws) =>
      ws.name.toLowerCase() === searchName ||
-     ws.branch.toLowerCase() === searchName,
+     ws.branch?.toLowerCase() === searchName,
  );
```

📝 Committable suggestion
);📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| const found = workspaces.find( | |
| (ws) => | |
| ws.name.toLowerCase() === searchName || | |
| ws.branch.toLowerCase() === searchName, | |
| ); | |
| const found = workspaces.find( | |
| (ws) => | |
| ws.name.toLowerCase() === searchName || | |
| ws.branch?.toLowerCase() === searchName, | |
| ); |
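For reference, optional chaining short-circuits before the method call, which is what makes the guard safe. A standalone sketch with an illustrative `normalize` helper:

```typescript
// Optional chaining yields undefined instead of throwing on null/undefined.
function normalize(branch: string | null | undefined): string {
  return branch?.toLowerCase() ?? "";
}

console.log(normalize("Main")); // "main"
console.log(normalize(null)); // ""
console.log(normalize(undefined)); // ""
```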
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/switch-workspace.ts`
around lines 27 - 31, The find predicate can throw when ws.branch is
null/undefined because it calls ws.branch.toLowerCase(); update the predicate
used in workspaces.find to defensively handle nullable branch and name by
comparing lowercased values using safe checks (e.g., use
ws.branch?.toLowerCase() or (ws.branch ?? '').toLowerCase() and same for
ws.name) against searchName so the comparison never calls .toLowerCase() on
null/undefined; keep the symbol references workspaces.find, ws.branch, ws.name,
and searchName when making the change.
```
┌─────────────────────────────────────────────────────────────────┐
│  RENDERER PROCESS                                               │
│                                                                 │
│  Push-to-Talk ──► Audio Recorder ──► Voice Store ──► UI         │
│  Hotkey          (Web Audio)        (Zustand)                   │
│                       │                                         │
│                       │ base64 audio                            │
│                       ▼                                         │
│  tRPC: voice.runAgent.subscribe()                               │
└─────────────────────────────────────────────────────────────────┘
                        │
                        ▼
┌─────────────────────────────────────────────────────────────────┐
│  MAIN PROCESS                                                   │
│                                                                 │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │ voiceRouter.runAgent (tRPC subscription)                 │   │
│  │                                                          │   │
│  │ 1. Transcribe audio (WisprFlow API)                      │   │
│  │ 2. Stream to Claude Agent SDK with custom tools          │   │
│  │ 3. Forward messages to renderer                          │   │
│  └──────────────────────────────────────────────────────────┘   │
│                       │                                         │
│                       ▼                                         │
│  ┌──────────────────────────────────────────────────────────┐   │
│  │ Custom Tools (called by Agent SDK)                       │   │
│  │ - createWorktree({ name, branch?, baseBranch? })         │   │
│  │ - listWorktrees({ projectId? })                          │   │
│  │ - getCurrentContext()                                    │   │
│  └──────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────┘
                        │
                        ▼ (spawned by SDK)
┌─────────────────────────────────────────────────────────────────┐
│  CLAUDE CODE CLI                                                │
│  Agent loop, tool execution, API calls                          │
└─────────────────────────────────────────────────────────────────┘
```
**Add a language to the fenced code block.**

Markdownlint flagged MD040 here. Please specify a language (e.g., `text`) after the opening backticks.
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
7-7: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@apps/desktop/VOICE_AGENT_PLAN.md` around lines 7 - 45, The fenced ASCII
diagram block in VOICE_AGENT_PLAN.md (the code block starting with the triple
backticks that contains the "RENDERER PROCESS" / "MAIN PROCESS" / "CLAUDE CODE
CLI" diagram) must include a language identifier to satisfy markdownlint MD040;
update the opening fence from ``` to something like ```text (or ```plain) so the
diagram is treated as a specific language and the linter warning is resolved.
```typescript
export async function transcribeAudio(audioBase64: string): Promise<string> {
  const apiKey = await getStoredApiKey("wisprflow");

  const response = await fetch(WISPRFLOW_API, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ audio: audioBase64, format: "wav" }),
  });

  const result = await response.json();
  return result.text;
}
```
**Handle non-200 responses and add timeouts for WisprFlow.**

The plan should include response status checks, error mapping, and a request timeout to avoid hanging the UI when transcription fails or stalls.
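One way to realize the timeout part is a generic promise wrapper. This is a hedged sketch: the `withTimeout` name is illustrative, and in the real plan an `AbortController` signal passed to `fetch` would also cancel the underlying request.

```typescript
// Rejects if the wrapped promise does not settle within `ms` milliseconds.
function withTimeout<T>(promise: Promise<T>, ms: number): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`Timed out after ${ms}ms`)),
      ms,
    );
    promise.then(
      (value) => {
        clearTimeout(timer);
        resolve(value);
      },
      (err) => {
        clearTimeout(timer);
        reject(err);
      },
    );
  });
}

// Usage sketch: withTimeout(fetch(WISPRFLOW_API, { ... }), 15_000)
```

Note that unlike an `AbortController`, this wrapper only stops waiting; the underlying request keeps running, so wiring the controller's `signal` into `fetch` is still worthwhile.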
🤖 Prompt for AI Agents
In `@apps/desktop/VOICE_AGENT_PLAN.md` around lines 210 - 224, transcribeAudio
currently assumes fetch succeeds and returns JSON; update it to handle non-200
statuses and timeouts: create an AbortController with a configurable timeout,
pass its signal to fetch(WISPRFLOW_API), check response.ok and map common status
codes to meaningful error messages, parse JSON only on ok responses, and throw
or return a structured error when status !== 200; also include getStoredApiKey
and WISPRFLOW_API in the error context so callers can log which key/endpoint
failed.
```
apps/desktop/src/
├── main/lib/
│   └── wisprflow.ts              # NEW: Transcription API
├── renderer/
│   ├── lib/audio-recorder.ts     # NEW: Web Audio recording
│   ├── stores/voice/store.ts     # NEW: Voice state
│   ├── hooks/useVoiceCommand.ts  # NEW: Hotkey + recording logic
│   └── components/VoiceOverlay/  # NEW: UI
├── lib/trpc/routers/voice/       # NEW: tRPC router + tools
└── shared/hotkeys.ts             # UPDATE: Add VOICE_PUSH_TO_TALK
```
**Add a language to the fenced code block.**
Markdownlint flagged MD040 here as well; add a language specifier after the opening fence.
🧰 Tools
🪛 markdownlint-cli2 (0.18.1)
301-301: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🤖 Prompt for AI Agents
In `@apps/desktop/VOICE_AGENT_PLAN.md` around lines 301 - 312, The fenced code
block containing the directory tree (the block that starts with ``` and lists
apps/desktop/src/ and its subfolders) is missing a language specifier which
triggers MD040; update that opening fence to include a language (for example
```text or ```bash) so the code block has an explicit language and the Markdown
linter stops flagging it.
Actionable comments posted: 5
🤖 Fix all issues with AI agents
In `@apps/desktop/src/main/lib/device-info.ts`:
- Around line 20-42: The platform-specific ID lookup currently swallows errors;
update each catch (around the linux readFileSync branches and the win32
execFileSync branch and the outer catch) to capture the error (e.g., catch
(err)) and log a contextual message (prefix like "device-id: failed to read
machine id for <platform>") including the error details but not the raw
machine-id; use the project logger if available (fall back to console.error) and
keep the message concise so failures in readFileSync and execFileSync are
visible for debugging without leaking IDs.
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/navigate-to-workspace.ts`:
- Around line 26-31: The matching code in navigate-to-workspace.ts uses
ws.branch.toLowerCase() which will throw if branch is null/undefined; update the
find predicate to guard branch (e.g., use optional chaining or a null check)
when calling toLowerCase so it only compares when ws.branch exists, keeping the
existing comparison to ws.name.toLowerCase() and using the same searchName
variable.
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/utils/settings-search/settings-search.ts`:
- Around line 576-609: search results currently ignore feature gating and return
"apikeys" entries even when AGENT_COMMANDS_ACCESS is disabled; update
searchSettings() (or getMatchingItemsForSection()) to pre-filter the items list
by feature eligibility before doing query matching — specifically exclude
SETTING_ITEM_ID.API_KEYS_LIST and SETTING_ITEM_ID.API_KEYS_GENERATE (or any
items with section === "apikeys") when the AGENT_COMMANDS_ACCESS flag is false
by using your existing feature-check helper (isFeatureEnabled or equivalent) so
visibleItems only contains allowed entries; also add route-level protection to
the /settings/api-keys page (ensure the route rendering the API Keys component
checks AGENT_COMMANDS_ACCESS and redirects/returns null when disabled) to keep
search and navigation behavior consistent.
In `@apps/mobile/screens/(authenticated)/index/HomeScreen.tsx`:
- Around line 19-28: Replace the manually declared OnlineDevice interface with
the TRPC-derived type to stay in sync with the server: remove the local
interface named OnlineDevice and instead import RouterOutputs from
"@superset/trpc" and create a type alias OnlineDevice =
RouterOutputs["device"]["listOnlineDevices"][number] so the component uses the
canonical device.listOnlineDevices output type (which includes createdAt and any
other fields).
In `@packages/trpc/src/router/agent/api-keys.ts`:
- Around line 134-220: The revoke and update mutations (revoke and update)
currently only filter by userId and can affect API keys across organizations;
extract organizationId from ctx.session.session.activeOrganizationId (same
pattern as generate/list), validate it exists (throw TRPCError BAD_REQUEST if
missing), and add an additional where clause eq(apiKeys.organizationId,
organizationId) to both db.update(...).where(...) calls so the queries include
organization scoping; ensure the validation and organizationId variable are used
in both the revoke and update procedures before running the DB update.
♻️ Duplicate comments (18)
apps/mobile/package.json (1)
**35-35: Use tilde versioning for expo-crypto to match the Expo SDK 54 specification.**

The past review correctly identified this issue. Expo SDK 54 documentation specifies `expo-crypto@~15.0.8`. Using `^15.0.8` may introduce incompatible minor versions.

.mcp.json (1)
expo-crypto@~15.0.8. Using^15.0.8may introduce incompatible minor versions..mcp.json (1)
6-8: Environment variable name mismatch with .env.example.The past review correctly identified this issue. This file references
SUPERSET_API_KEY, but.env.exampledefinesSUPERSET_MCP_API_KEY. This mismatch will cause missing API key headers and authentication failures."headers": { - "X-API-Key": "${SUPERSET_API_KEY}" + "X-API-Key": "${SUPERSET_MCP_API_KEY}" }apps/mobile/hooks/useDevicePresence/useDevicePresence.ts (1)
**12-20: Don't swallow SecureStore / heartbeat errors; log with context.**

Silent failures make device-presence debugging impossible and violate the logging requirement. Please log with a prefixed context and keep noise low (warn/debug).

🔧 Suggested fix

```diff
 async function getOrCreateDeviceId(): Promise<string> {
-  const existingId = await SecureStore.getItemAsync(DEVICE_ID_KEY).catch(
-    () => null,
-  );
+  let existingId: string | null = null;
+  try {
+    existingId = await SecureStore.getItemAsync(DEVICE_ID_KEY);
+  } catch (error) {
+    console.warn(
+      "[devicePresence/getOrCreateDeviceId] Failed to read device id",
+      error,
+    );
+  }
   if (existingId) return existingId;

   const newId = randomUUID();
-  await SecureStore.setItemAsync(DEVICE_ID_KEY, newId).catch(() => {});
+  try {
+    await SecureStore.setItemAsync(DEVICE_ID_KEY, newId);
+  } catch (error) {
+    console.warn(
+      "[devicePresence/getOrCreateDeviceId] Failed to persist device id",
+      error,
+    );
+  }
   return newId;
 }
@@
-  } catch {
-    // Heartbeat can fail when offline - ignore
+  } catch (error) {
+    console.debug(
+      "[devicePresence/heartbeat] Heartbeat failed (offline?)",
+      error,
+    );
   }
```

Based on learnings, never swallow errors silently; log them with prefixed context.
Also applies to: 38-48
apps/desktop/src/renderer/routes/_authenticated/hooks/useDevicePresence.ts (1)
**16-24: Don't swallow heartbeat errors; log with context.**

The catch block hides connectivity/auth issues. Log the error with a prefixed context (and optionally device identifiers) to keep failures observable.

🔧 Suggested fix

```diff
-  } catch {
-    // Heartbeat can fail when offline - ignore
-  }
+  } catch (error) {
+    console.warn(
+      "[devicePresence/heartbeat] failed",
+      {
+        deviceId: deviceInfo.deviceId,
+        deviceName: deviceInfo.deviceName,
+      },
+      error,
+    );
+  }
```

Based on learnings, never swallow errors silently; log them with prefixed context.
apps/desktop/src/renderer/routes/_authenticated/providers/CollectionsProvider/collections.ts (1)
**203-233: Inconsistent `txid` handling persists.**

The `onUpdate` handler returns `Date.now()` as a fallback (line 219) and coerces the server's BigInt to Number (line 230). This was previously flagged: the server-side `agent.updateCommand` should use `getCurrentTxid(tx)` within a transaction like the other routers, eliminating these client-side workarounds.

apps/desktop/src/renderer/routes/_authenticated/settings/api-keys/components/ApiKeysSettings/ApiKeysSettings.tsx (3)
**110-117: Add confirmation before revoking API keys.**

Revoking is irreversible since the key is cryptographically hashed. Consider adding `window.confirm()` or a confirmation dialog before calling the revoke mutation.
**119-123: Handle clipboard errors and extract the magic number.**

`navigator.clipboard.writeText()` can fail if permissions are denied. Also, extract `2000` to a named constant per coding guidelines.
**259-261: Guard the Enter key against double submission.**

Add an `!isGenerating` check to prevent rapid Enter presses from triggering multiple API calls while generation is in progress.

apps/desktop/VOICE_AGENT_PLAN.md (5)
**7-45: Add a language specifier to the fenced code block.**

The ASCII diagram code block is missing a language identifier (e.g., `text`), which triggers markdownlint MD040.
**73-101: Implementation note: AudioRecorder needs resource cleanup.**

When implementing this plan, ensure MediaStream tracks are stopped after recording, add error handling for `getUserMedia`, and include a `dispose()` method for component-unmount cleanup.
**210-224: Implementation note: Add error handling for transcription.**

When implementing, add response status checks, timeouts via `AbortController`, and proper error mapping for the WisprFlow API call.
**265-297: Implementation note: the hook needs subscription cleanup.**

When implementing `useVoiceCommand`, store the subscription reference and unsubscribe on unmount. Also dispose the AudioRecorder to release media streams.
**301-312: Add a language specifier to the fenced code block.**

The file-structure code block should include a language identifier (e.g., `text`).

packages/db/drizzle/0013_add_mcp_actions_and_presence.sql (2)
**7-8: Use the device_type enum for target_device_type.**

This keeps values validated and aligned with the enum you already created.

💡 Suggested fix

```diff
- "target_device_type" text,
+ "target_device_type" "device_type",
```
**31-32: Store usage_count as numeric.**

Text storage breaks numeric ordering and atomic increments.

💡 Suggested fix

```diff
- "usage_count" text DEFAULT '0' NOT NULL,
+ "usage_count" bigint DEFAULT 0 NOT NULL,
```

packages/db/src/schema/schema.ts (3)
**276-286: The unique index should include organizationId to prevent cross-org collisions.**

The current index `(userId, deviceId)` prevents the same user+device from appearing in multiple organizations, breaking multi-org presence tracking.

🐛 Suggested fix

```diff
- uniqueIndex("device_presence_user_device_idx").on(
+ uniqueIndex("device_presence_user_org_device_idx").on(
    table.userId,
+   table.organizationId,
    table.deviceId,
  ),
```
303-307: Add FK for `parentCommandId` and use `deviceType` enum for type safety.

`targetDeviceType` uses `text` instead of the `deviceType` enum defined at line 32, risking invalid values. `parentCommandId` lacks a foreign key reference, allowing orphaned parent references.

🛠️ Suggested fix

```diff
  targetDeviceId: text("target_device_id"),
- targetDeviceType: text("target_device_type"),
+ targetDeviceType: deviceType("target_device_type"),
  tool: text().notNull(),
  params: jsonb().$type<Record<string, unknown>>(),
- parentCommandId: uuid("parent_command_id"),
+ parentCommandId: uuid("parent_command_id").references(
+   () => agentCommands.id,
+   { onDelete: "set null" },
+ ),
```
349-349: **Store usageCount as an integer, not text.** Text counts break numeric ordering (`"10" < "2"`) and arithmetic. Use `integer().default(0)` for proper numeric handling.

💡 Suggested fix

```diff
- usageCount: text("usage_count").notNull().default("0"),
+ usageCount: integer("usage_count").notNull().default(0),
```
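The ordering failure this comment describes (`"10" < "2"`) is easy to demonstrate; this is an illustrative sketch, not code from the PR:

```typescript
// Demonstrates the ordering problem with text-typed counters:
// lexicographic comparison puts "10" before "2".
const textCounts: string[] = ["10", "2", "1"];
const lexicographic = [...textCounts].sort();
console.log(lexicographic); // → ["1", "10", "2"]

// Integer storage sorts and compares as expected.
const intCounts: number[] = [10, 2, 1];
const numeric = [...intCounts].sort((a, b) => a - b);
console.log(numeric); // → [1, 2, 10]

// The same mismatch affects comparisons used in queries and increments:
console.log("10" < "2"); // → true (string comparison)
console.log(10 < 2); // → false
```

The same applies at the SQL level: `ORDER BY usage_count` on a text column sorts lexicographically, and `usage_count + 1` requires a cast.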
🧹 Nitpick comments (10)
apps/desktop/src/renderer/routes/_authenticated/hooks/tools/navigate-to-workspace.ts (1)
13-16: Prefer object params for `execute`. Team guideline recommends a single params object for functions with 2+ parameters. Consider `{ params, ctx }` and update the tool runner accordingly. As per coding guidelines, this keeps signatures self-documenting and extensible.

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/list-workspaces.ts (2)
6-9: Prefer object params for `execute`. Team guideline recommends a single params object for functions with 2+ parameters. Consider `{ params, ctx }` and update the tool runner accordingly. As per coding guidelines, this keeps signatures self-documenting and extensible.
16-19: Avoid double-casting `workspaces` to `Record<string, unknown>[]`. The `as unknown as` cast bypasses type safety. Prefer mapping to an explicit DTO (e.g., `{ id, name, branch }`) or broadening `CommandResult` to accept typed payloads. As per coding guidelines, keep strong typing instead of unsafe casts.

apps/desktop/src/renderer/routes/_authenticated/hooks/tools/list-projects.ts (1)
6-9: Prefer object params for `execute`. Team guideline recommends a single params object for functions with 2+ parameters. Consider `{ params, ctx }` and update the tool runner accordingly. As per coding guidelines, this keeps signatures self-documenting and extensible.

apps/mobile/screens/(authenticated)/index/HomeScreen.tsx (2)
37-44: Align console prefix with the required [domain/operation] pattern. The log prefix is close but missing the operation part. Consider making it explicit to match the guideline.

🔧 Suggested adjustment

```diff
- } catch (err) {
-   console.warn("[devices] Failed to fetch:", err);
+ } catch (err) {
+   console.warn("[device/listOnlineDevices] Failed to fetch online devices:", err);
  } finally {
```

As per coding guidelines, ...
49-60: Extract polling and time constants to named values. This keeps magic numbers centralized and self-documenting.

♻️ Suggested refactor

```diff
+const DEVICE_POLL_INTERVAL_MS = 10_000;
+const MILLISECONDS_IN_SECOND = 1_000;
+const SECONDS_IN_MINUTE = 60;
+const MINUTES_IN_HOUR = 60;
+
 export function HomeScreen() {
@@
   useEffect(() => {
     fetchDevices();
-    const interval = setInterval(fetchDevices, 10_000);
+    const interval = setInterval(fetchDevices, DEVICE_POLL_INTERVAL_MS);
     return () => clearInterval(interval);
   }, [fetchDevices]);

   const formatLastSeen = (date: Date) => {
-    const seconds = Math.floor((Date.now() - new Date(date).getTime()) / 1000);
-    if (seconds < 60) return `${seconds}s ago`;
-    const minutes = Math.floor(seconds / 60);
-    if (minutes < 60) return `${minutes}m ago`;
+    const seconds = Math.floor(
+      (Date.now() - date.getTime()) / MILLISECONDS_IN_SECOND,
+    );
+    if (seconds < SECONDS_IN_MINUTE) return `${seconds}s ago`;
+    const minutes = Math.floor(seconds / SECONDS_IN_MINUTE);
+    if (minutes < MINUTES_IN_HOUR) return `${minutes}m ago`;
     return new Date(date).toLocaleTimeString();
   };
```

As per coding guidelines, ...
apps/desktop/plans/20260123-mcp-layout-tools.md (1)
109-119: Align planned ToolContext signatures with params-object guideline. The planned signatures use positional args for 2+ params (e.g., `setTabLayout`, `createTab`, `setActiveTab`). Consider documenting them as a single params object to match the project's TS style and keep future extensibility clean.

♻️ Proposed doc update

```diff
- setTabLayout: (tabId: string, layout: LayoutNode) => { createdPanes: string[] }
- createTab: (workspaceId: string, layout?: LayoutNode) => { tabId: string }
- setActiveTab: (workspaceId: string, tabId: string) => void
+ setTabLayout: (params: { tabId: string; layout: LayoutNode }) => { createdPanes: string[] }
+ createTab: (params: { workspaceId: string; layout?: LayoutNode }) => { tabId: string }
+ setActiveTab: (params: { workspaceId: string; tabId: string }) => void
```

As per coding guidelines, use a params object for functions with 2+ parameters.
apps/desktop/src/main/lib/device-info.ts (1)
77-95: Consider tightening device-name sanitization or update the docstring. The comment says it avoids PII by not using hostnames, but the function still returns the hostname in many cases (which often includes the user's name). If the intent is to avoid PII, consider switching to an OS-only label or a user-provided display name; otherwise, adjust the comment to match behavior.
apps/desktop/src/renderer/routes/_authenticated/settings/api-keys/components/ApiKeysSettings/ApiKeysSettings.tsx (1)
73-87: Consider cleanup for async effect. The `loadApiKeys` effect lacks cancellation handling. If the component unmounts during the fetch, `setApiKeys` and `setIsLoading` will attempt to update unmounted state.

Proposed fix with AbortController

```diff
 const loadApiKeys = useCallback(async () => {
+  const controller = new AbortController();
   try {
     setIsLoading(true);
-    const keys = await apiTrpcClient.apiKeys.list.query();
+    const keys = await apiTrpcClient.apiKeys.list.query(undefined, {
+      signal: controller.signal,
+    });
     setApiKeys(keys);
   } catch (error) {
+    if (error.name === 'AbortError') return;
     console.error("[api-keys] Failed to load API keys:", error);
   } finally {
     setIsLoading(false);
   }
+  return () => controller.abort();
 }, []);

 useEffect(() => {
-  loadApiKeys();
-}, [loadApiKeys]);
+  const cleanup = loadApiKeys();
+  return () => cleanup?.();
+}, []);
```

packages/db/src/schema/schema.ts (1)
356-356: Consider making `keyHash` a unique index. While cryptographic hash collisions are extremely rare, making the index unique provides an additional integrity guarantee and potentially faster point lookups. This also prevents accidental duplicates if the hashing logic has bugs.

♻️ Suggested change

```diff
- index("api_keys_key_hash_idx").on(table.keyHash),
+ uniqueIndex("api_keys_key_hash_idx").on(table.keyHash),
```
```ts
  } else if (os === "linux") {
    try {
      return readFileSync("/etc/machine-id", "utf8").trim();
    } catch {
      return readFileSync("/var/lib/dbus/machine-id", "utf8").trim();
    }
  } else if (os === "win32") {
    const output = execFileSync(
      "reg",
      [
        "query",
        "HKLM\\SOFTWARE\\Microsoft\\Cryptography",
        "/v",
        "MachineGuid",
      ],
      { encoding: "utf8" },
    );
    const match = output.match(/MachineGuid\s+REG_SZ\s+(\S+)/);
    if (match?.[1]) return match[1];
  }
} catch {
  // Fallback if platform-specific method fails
}
```
Log platform ID lookup failures instead of silently ignoring them.
These catch blocks currently hide failures and make device-id instability hard to diagnose. Please log with a prefixed context (avoid logging raw IDs).
🔧 Suggested fix
```diff
 } else if (os === "linux") {
   try {
     return readFileSync("/etc/machine-id", "utf8").trim();
-  } catch {
+  } catch (error) {
+    console.warn(
+      "[deviceInfo/getRawMachineId] Failed /etc/machine-id, trying dbus",
+      error,
+    );
     return readFileSync("/var/lib/dbus/machine-id", "utf8").trim();
   }
 } else if (os === "win32") {
@@
-} catch {
-  // Fallback if platform-specific method fails
+} catch (error) {
+  console.warn(
+    "[deviceInfo/getRawMachineId] Platform lookup failed; using fallback",
+    error,
+  );
 }
```

As per coding guidelines, never swallow errors silently; log them with prefixed context.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
} else if (os === "linux") {
  try {
    return readFileSync("/etc/machine-id", "utf8").trim();
  } catch (error) {
    console.warn(
      "[deviceInfo/getRawMachineId] Failed /etc/machine-id, trying dbus",
      error,
    );
    return readFileSync("/var/lib/dbus/machine-id", "utf8").trim();
  }
} else if (os === "win32") {
  const output = execFileSync(
    "reg",
    [
      "query",
      "HKLM\\SOFTWARE\\Microsoft\\Cryptography",
      "/v",
      "MachineGuid",
    ],
    { encoding: "utf8" },
  );
  const match = output.match(/MachineGuid\s+REG_SZ\s+(\S+)/);
  if (match?.[1]) return match[1];
}
} catch (error) {
  console.warn(
    "[deviceInfo/getRawMachineId] Platform lookup failed; using fallback",
    error,
  );
}
```
🤖 Prompt for AI Agents
In `@apps/desktop/src/main/lib/device-info.ts` around lines 20 - 42, The
platform-specific ID lookup currently swallows errors; update each catch (around
the linux readFileSync branches and the win32 execFileSync branch and the outer
catch) to capture the error (e.g., catch (err)) and log a contextual message
(prefix like "device-id: failed to read machine id for <platform>") including
the error details but not the raw machine-id; use the project logger if
available (fall back to console.error) and keep the message concise so failures
in readFileSync and execFileSync are visible for debugging without leaking IDs.
```ts
const searchName = params.workspaceName.toLowerCase();
const found = workspaces.find(
  (ws) =>
    ws.name.toLowerCase() === searchName ||
    ws.branch.toLowerCase() === searchName,
);
```
Guard against nullable branch during name matching.
`ws.branch.toLowerCase()` will throw if `branch` is null/undefined. Use optional chaining to avoid runtime failures when a workspace doesn't have a branch.
🔧 Proposed fix
```diff
- const found = workspaces.find(
-   (ws) =>
-     ws.name.toLowerCase() === searchName ||
-     ws.branch.toLowerCase() === searchName,
- );
+ const found = workspaces.find(
+   (ws) =>
+     ws.name.toLowerCase() === searchName ||
+     ws.branch?.toLowerCase() === searchName,
+ );
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const searchName = params.workspaceName.toLowerCase();
const found = workspaces.find(
  (ws) =>
    ws.name.toLowerCase() === searchName ||
    ws.branch?.toLowerCase() === searchName,
);
```
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/hooks/tools/navigate-to-workspace.ts`
around lines 26 - 31, The matching code in navigate-to-workspace.ts uses
ws.branch.toLowerCase() which will throw if branch is null/undefined; update the
find predicate to guard branch (e.g., use optional chaining or a null check)
when calling toLowerCase so it only compares when ws.branch exists, keeping the
existing comparison to ws.name.toLowerCase() and using the same searchName
variable.
```ts
{
  id: SETTING_ITEM_ID.API_KEYS_LIST,
  section: "apikeys",
  title: "API Keys",
  description: "Manage API keys for MCP server access",
  keywords: [
    "api",
    "key",
    "keys",
    "mcp",
    "claude",
    "integration",
    "external",
    "access",
    "token",
    "authentication",
  ],
},
{
  id: SETTING_ITEM_ID.API_KEYS_GENERATE,
  section: "apikeys",
  title: "Generate API Key",
  description: "Create new API keys for external integrations",
  keywords: [
    "api",
    "key",
    "generate",
    "create",
    "new",
    "mcp",
    "claude desktop",
    "claude code",
  ],
},
```
🧩 Analysis chain
🏁 Script executed:

```bash
#!/bin/bash
# Verify SettingsSection definition and API keys visibility gating.
rg -n "SettingsSection" apps/desktop/src -S
rg -n "apikeys|API_KEYS" apps/desktop/src/renderer -S
rg -n "AGENT_COMMANDS_ACCESS" apps/desktop/src -S
```

Repository: superset-sh/superset
Length of output: 4416

🏁 Script executed:

```bash
#!/bin/bash
# Check SettingsSection type definition
sed -n '8,25p' apps/desktop/src/renderer/stores/settings-state.ts
# Check gating in GeneralSettings
sed -n '100,125p' apps/desktop/src/renderer/routes/_authenticated/settings/components/SettingsSidebar/GeneralSettings.tsx
# Check how search filters results
sed -n '620,650p' apps/desktop/src/renderer/routes/_authenticated/settings/utils/settings-search/settings-search.ts
```

Repository: superset-sh/superset
Length of output: 1992

🏁 Script executed:

```bash
#!/bin/bash
# Check the searchSettings function implementation
sed -n '1,75p' apps/desktop/src/renderer/routes/_authenticated/settings/utils/settings-search/settings-search.ts | tail -30
# Find searchSettings function definition
rg -n "function searchSettings|const searchSettings" apps/desktop/src/renderer/routes/_authenticated/settings/utils/settings-search/settings-search.ts -A 10
```

Repository: superset-sh/superset
Length of output: 1041

🏁 Script executed:

```bash
#!/bin/bash
# Check the API Keys page and how it uses search
cat -n apps/desktop/src/renderer/routes/_authenticated/settings/api-keys/page.tsx
# Check if visibleItems filtering is applied anywhere
rg -n "isItemVisible|visibleItems" apps/desktop/src/renderer/routes/_authenticated/settings -A 3
```

Repository: superset-sh/superset
Length of output: 45145

🏁 Script executed:

```bash
#!/bin/bash
# Check if api-keys route or layout has feature flag protection
rg -n "AGENT_COMMANDS_ACCESS|requiresAgentCommands" apps/desktop/src/renderer/routes/_authenticated/settings/api-keys -A 2 -B 2
# Check settings layout for route guards
rg -n "protectedRoutes|requiresAuth|FEATURE_FLAGS" apps/desktop/src/renderer/routes/_authenticated/settings/layout.tsx -A 2
```

Repository: superset-sh/superset
Length of output: 46
Fix search filtering to gate feature-flagged sections.

The "apikeys" section is registered in the `SettingsSection` type, and the navigation sidebar properly filters it based on the `AGENT_COMMANDS_ACCESS` feature flag. However, the search results do not apply this filtering: `searchSettings()` returns all items regardless of feature flags, and `getMatchingItemsForSection()` filters only by section name, not by feature-flag status. This means search results can expose API Keys items even when the feature is disabled, creating an inconsistency with the sidebar filtering.

The `visibleItems` mechanism only filters search results by query match, not by feature-flag eligibility. Additionally, the `/settings/api-keys` route has no feature-flag protection.

Apply feature-flag checks to `searchSettings()` or `getMatchingItemsForSection()` to ensure feature-gated items do not appear in search results when disabled, and add route-level protection to the API Keys page.
🤖 Prompt for AI Agents
In
`@apps/desktop/src/renderer/routes/_authenticated/settings/utils/settings-search/settings-search.ts`
around lines 576 - 609, search results currently ignore feature gating and
return "apikeys" entries even when AGENT_COMMANDS_ACCESS is disabled; update
searchSettings() (or getMatchingItemsForSection()) to pre-filter the items list
by feature eligibility before doing query matching — specifically exclude
SETTING_ITEM_ID.API_KEYS_LIST and SETTING_ITEM_ID.API_KEYS_GENERATE (or any
items with section === "apikeys") when the AGENT_COMMANDS_ACCESS flag is false
by using your existing feature-check helper (isFeatureEnabled or equivalent) so
visibleItems only contains allowed entries; also add route-level protection to
the /settings/api-keys page (ensure the route rendering the API Keys component
checks AGENT_COMMANDS_ACCESS and redirects/returns null when disabled) to keep
search and navigation behavior consistent.
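A minimal sketch of the pre-filtering described above; the `isFeatureEnabled` helper, the `GATED_SECTIONS` map, and the item shape are assumptions for illustration, not the repo's actual API:

```typescript
// Hypothetical sketch: drop feature-gated sections before query matching.
interface SettingItem {
  id: string;
  section: string;
  title: string;
  keywords: string[];
}

type FeatureCheck = (flag: string) => boolean;

// Sections that only exist behind a feature flag (assumed mapping).
const GATED_SECTIONS: Record<string, string> = {
  apikeys: "AGENT_COMMANDS_ACCESS",
};

function filterByFeatureFlags(
  items: SettingItem[],
  isFeatureEnabled: FeatureCheck,
): SettingItem[] {
  return items.filter((item) => {
    const requiredFlag = GATED_SECTIONS[item.section];
    // Ungated sections always pass; gated ones need their flag enabled.
    return requiredFlag === undefined || isFeatureEnabled(requiredFlag);
  });
}

// Example: with the flag disabled, "apikeys" items drop out of search results.
const items: SettingItem[] = [
  { id: "general.theme", section: "general", title: "Theme", keywords: ["theme"] },
  { id: "apikeys.list", section: "apikeys", title: "API Keys", keywords: ["api"] },
];
const visible = filterByFeatureFlags(
  items,
  (flag) => flag !== "AGENT_COMMANDS_ACCESS",
);
console.log(visible.map((i) => i.id)); // → ["general.theme"]
```

Running the same filter before `searchSettings()` does its query matching keeps search results and sidebar gating consistent from a single source of truth.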
```ts
interface OnlineDevice {
  id: string;
  deviceId: string;
  deviceName: string;
  deviceType: "desktop" | "mobile" | "web";
  lastSeenAt: Date;
  ownerId: string;
  ownerName: string;
  ownerEmail: string;
}
```
🧩 Analysis chain
🏁 Script executed:

```bash
# Find TRPC router files and device-related operations
find . -type f \( -name "*.ts" -o -name "*.tsx" \) | xargs grep -l "listOnlineDevices" | head -20
```

Repository: superset-sh/superset
Length of output: 280

🏁 Script executed:

```bash
# Search for TRPC router setup and AppRouter definition
rg -n "AppRouter|RouterOutputs" --type=ts --type=tsx | grep -E "(export|type)" | head -30
```

Repository: superset-sh/superset
Length of output: 91

🏁 Script executed:

```bash
# Look for the device router implementation
rg -n "device.*router|listOnlineDevices" --type=ts --type=tsx -A 5 | head -50
```

Repository: superset-sh/superset
Length of output: 91

🏁 Script executed:

```bash
# Check the current HomeScreen.tsx imports and full context around lines 19-28
head -60 apps/mobile/screens/authenticated/index/HomeScreen.tsx 2>/dev/null || find . -name "HomeScreen.tsx" -path "*/mobile/screens/*" | xargs head -60
```

Repository: superset-sh/superset
Length of output: 1860

🏁 Script executed:

```bash
# Read the device router to see listOnlineDevices return type
cat packages/trpc/src/router/device/device.ts
```

Repository: superset-sh/superset
Length of output: 2546

🏁 Script executed:

```bash
# Look for TRPC index/setup files that export AppRouter and RouterOutputs
find packages/trpc/src -name "*.ts" | xargs grep -l "export.*AppRouter\|export.*RouterOutputs" | head -5
```

Repository: superset-sh/superset
Length of output: 116

🏁 Script executed:

```bash
# Check the TRPC index.ts for type exports
cat packages/trpc/src/index.ts
```

Repository: superset-sh/superset
Length of output: 370

🏁 Script executed:

```bash
# Check the TRPC root.ts for AppRouter definition
cat packages/trpc/src/root.ts
```

Repository: superset-sh/superset
Length of output: 1199

🏁 Script executed:

```bash
# Check if HomeScreen can import from the TRPC package
grep -n "import.*from.*trpc\|@superset/trpc" apps/mobile/screens/authenticated/index/HomeScreen.tsx
```

Repository: superset-sh/superset
Length of output: 151

🏁 Script executed:

```bash
# Check the package.json or alias setup to see if `@superset/trpc` is accessible from mobile
find . -name "package.json" -o -name "tsconfig*.json" -o -name "*.config.*" | xargs grep -l "@superset/trpc\|path.*trpc" | head -10
```

Repository: superset-sh/superset
Length of output: 221

🏁 Script executed:

```bash
# Check if HomeScreen is using the full listOnlineDevices return or just parts of it
grep -n "devices\|OnlineDevice" apps/mobile/screens/authenticated/index/HomeScreen.tsx | head -20
```

Repository: superset-sh/superset
Length of output: 151

🏁 Script executed:

```bash
# Find the actual HomeScreen.tsx file
find . -name "HomeScreen.tsx" -type f
```

Repository: superset-sh/superset
Length of output: 122

🏁 Script executed:

```bash
# List all files in mobile screens directory
find apps/mobile/screens -type f -name "*.tsx" -o -name "*.ts" 2>/dev/null | head -20
```

Repository: superset-sh/superset
Length of output: 373

🏁 Script executed:

```bash
# Read the HomeScreen.tsx file with the correct path (with parentheses)
cat "apps/mobile/screens/(authenticated)/index/HomeScreen.tsx"
```

Repository: superset-sh/superset
Length of output: 7235

🏁 Script executed:

```bash
# Check the mobile app's package.json for dependency access to `@superset/trpc`
cat apps/mobile/package.json
```

Repository: superset-sh/superset
Length of output: 2016

🏁 Script executed:

```bash
# Check tsconfig or path aliases in mobile app
cat apps/mobile/tsconfig.json 2>/dev/null || cat apps/mobile/tsconfig.*.json 2>/dev/null || echo "No tsconfig found"
```

Repository: superset-sh/superset
Length of output: 300
Use the TRPC `RouterOutputs` type instead of manually duplicating the `OnlineDevice` interface.

The local interface is missing the `createdAt` field that the actual `device.listOnlineDevices` query returns, risking runtime type mismatches. Import the type from TRPC instead:

```ts
import type { RouterOutputs } from "@superset/trpc";
type OnlineDevice = RouterOutputs["device"]["listOnlineDevices"][number];
```

This eliminates duplication and ensures the interface stays in sync with the server schema.
🤖 Prompt for AI Agents
In `@apps/mobile/screens/`(authenticated)/index/HomeScreen.tsx around lines 19 -
28, Replace the manually declared OnlineDevice interface with the TRPC-derived
type to stay in sync with the server: remove the local interface named
OnlineDevice and instead import RouterOutputs from "@superset/trpc" and create a
type alias OnlineDevice = RouterOutputs["device"]["listOnlineDevices"][number]
so the component uses the canonical device.listOnlineDevices output type (which
includes createdAt and any other fields).
```ts
revoke: protectedProcedure
  .input(z.object({ id: z.string().uuid() }))
  .mutation(async ({ ctx, input }) => {
    const userId = ctx.session.user.id;

    // Verify ownership and revoke
    const [revoked] = await db
      .update(apiKeys)
      .set({ revokedAt: new Date() })
      .where(
        and(
          eq(apiKeys.id, input.id),
          eq(apiKeys.userId, userId),
          isNull(apiKeys.revokedAt),
        ),
      )
      .returning({ id: apiKeys.id });

    if (!revoked) {
      throw new TRPCError({
        code: "NOT_FOUND",
        message: "API key not found or already revoked",
      });
    }

    return { success: true, revokedAt: new Date() };
  }),

/**
 * Update an API key (name, default device)
 */
update: protectedProcedure
  .input(
    z.object({
      id: z.string().uuid(),
      name: z.string().min(1).max(100).optional(),
      defaultDeviceId: z.string().nullable().optional(),
    }),
  )
  .mutation(async ({ ctx, input }) => {
    const userId = ctx.session.user.id;

    const updates: Partial<{
      name: string;
      defaultDeviceId: string | null;
    }> = {};

    if (input.name !== undefined) {
      updates.name = input.name;
    }
    if (input.defaultDeviceId !== undefined) {
      updates.defaultDeviceId = input.defaultDeviceId;
    }

    if (Object.keys(updates).length === 0) {
      throw new TRPCError({
        code: "BAD_REQUEST",
        message: "No updates provided",
      });
    }

    const [updated] = await db
      .update(apiKeys)
      .set(updates)
      .where(
        and(
          eq(apiKeys.id, input.id),
          eq(apiKeys.userId, userId),
          isNull(apiKeys.revokedAt),
        ),
      )
      .returning();

    if (!updated) {
      throw new TRPCError({
        code: "NOT_FOUND",
        message: "API key not found or already revoked",
      });
    }

    return {
      id: updated.id,
      name: updated.name,
      keyPrefix: updated.keyPrefix,
      defaultDeviceId: updated.defaultDeviceId,
    };
  }),
```
🧩 Analysis chain
🏁 Script executed:

```bash
# First, let's find the apiKeys schema definition
fd -e ts -e tsx | head -20
```

Repository: superset-sh/superset
Length of output: 968

🏁 Script executed:

```bash
# Search for apiKeys table definition
rg "apiKeys|api_keys" --type ts --type tsx -l | head -10
```

Repository: superset-sh/superset
Length of output: 91

🏁 Script executed:

```bash
# Find the schema file
fd schema -type f | head -20
```

Repository: superset-sh/superset
Length of output: 234

🏁 Script executed:

```bash
# Search for apiKeys table definition
rg "apiKeys" --no-heading -A 5 | head -100
```

Repository: superset-sh/superset
Length of output: 6008

🏁 Script executed:

```bash
# Find schema-related files
find . -name "*.ts" -path "*/schema*" -o -name "schema.ts" | head -20
```

Repository: superset-sh/superset
Length of output: 595

🏁 Script executed:

```bash
# Look for database schema definitions
find . -type f -name "*.ts" | xargs grep -l "export const apiKeys" 2>/dev/null | head -10
```

Repository: superset-sh/superset
Length of output: 181

🏁 Script executed:

```bash
# Find the actual file location containing apiKeys
rg "export const apiKeys" -type ts | head -5
```

Repository: superset-sh/superset
Length of output: 91

🏁 Script executed:

```bash
# Read the full api-keys.ts file to understand the context
cat -n packages/trpc/src/router/agent/api-keys.ts
```

Repository: superset-sh/superset
Length of output: 6902
Add organization scoping to revoke and update mutations.

Both mutations only filter by `userId` without checking `organizationId`. A multi-org user could revoke or update API keys from another organization without switching context.

Extract `organizationId` from `ctx.session.session.activeOrganizationId` (following the pattern already used in the `generate` and `list` procedures), validate it exists, and add an `eq(apiKeys.organizationId, organizationId)` condition to both `where` clauses.
Suggested pattern (follow generate/list implementation)

```diff
 revoke: protectedProcedure
   .input(z.object({ id: z.string().uuid() }))
   .mutation(async ({ ctx, input }) => {
+    const organizationId = ctx.session.session.activeOrganizationId;
+    if (!organizationId) {
+      throw new TRPCError({
+        code: "BAD_REQUEST",
+        message: "No active organization selected",
+      });
+    }
     const userId = ctx.session.user.id;
     const [revoked] = await db
       .update(apiKeys)
       .set({ revokedAt: new Date() })
       .where(
         and(
           eq(apiKeys.id, input.id),
           eq(apiKeys.userId, userId),
+          eq(apiKeys.organizationId, organizationId),
           isNull(apiKeys.revokedAt),
         ),
       )
-      .returning({ id: apiKeys.id });
+      .returning({ id: apiKeys.id, revokedAt: apiKeys.revokedAt });
     if (!revoked) {
       throw new TRPCError({
         code: "NOT_FOUND",
         message: "API key not found or already revoked",
       });
     }
-    return { success: true, revokedAt: new Date() };
+    return { success: true, revokedAt: revoked.revokedAt };
   }),
```

Apply the same pattern to the `update` mutation.
🤖 Prompt for AI Agents
In `@packages/trpc/src/router/agent/api-keys.ts` around lines 134 - 220, The
revoke and update mutations (revoke and update) currently only filter by userId
and can affect API keys across organizations; extract organizationId from
ctx.session.session.activeOrganizationId (same pattern as generate/list),
validate it exists (throw TRPCError BAD_REQUEST if missing), and add an
additional where clause eq(apiKeys.organizationId, organizationId) to both
db.update(...).where(...) calls so the queries include organization scoping;
ensure the validation and organizationId variable are used in both the revoke
and update procedures before running the DB update.
Actionable comments posted: 7
🤖 Fix all issues with AI agents
In `@apps/api/package.json`:
- Line 16: The package.json dependency for `@modelcontextprotocol/sdk` points to a
non-existent version (^1.25.3); update the dependency entry for
"@modelcontextprotocol/sdk" to the latest published version (e.g., "1.25.2" or
"^1.25.2") in package.json and then run your package manager (npm install or
yarn install) to refresh lockfiles and ensure the project uses the valid SDK
release.
In `@apps/api/src/app/api/mcp/route.ts`:
- Around line 46-59: The per-request MCP server created via createMcpServer(ctx)
is not being closed, risking transport-level resource leaks; wrap the
connect/handleRequest logic in a try/finally where you await server.close() in
the finally to ensure cleanup even on error, guarding for server being defined
(e.g., if (server) await server.close()) and optionally catching/logging errors
from close so they don't mask the original exception; reference createMcpServer,
WebStandardStreamableHTTPServerTransport, server.connect,
transport.handleRequest, and server.close.
In `@apps/api/src/lib/mcp/tools.ts`:
- Around line 559-564: The ilike pattern currently interpolates raw
params.search into the SQL (used in the or(ilike(users.name,
`%${params.search}%`), ilike(users.email, `%${params.search}%`))) which allows
unescaped % and _ to be treated as wildcards; update the code to escape SQL
wildcard characters in params.search (escape '%' and '_' by prefixing with the
SQL escape char) before wrapping with surrounding '%' for contains-matching, and
use that escapedPattern in both ilike calls; reference the params.search
variable, the ilike(...) calls on users.name and users.email, and the or(...)
expression so you replace the raw interpolation with the escapedPattern.
- Around line 299-307: The search pattern built from params.search is vulnerable
to unintended SQL wildcard matches; before calling ilike(tasks.title,
`%${params.search}%`) and ilike(tasks.description, `%${params.search}%`) escape
SQL wildcard characters (%) and (_) and backslashes in params.search (e.g.
replace \ with \\, % with \% and _ with \_) then build the pattern as
`%${escapedSearch}%`, and ensure the query uses the same escape convention (or
an ESCAPE '\\' clause) so searchCondition and conditions receive the correctly
escaped ilike patterns.
In `@apps/desktop/VOICE_AGENT_PLAN.md`:
- Around line 203-225: Update the plan to document where and how WisprFlow API
keys are stored and surfaced to the user: specify using a secure mechanism
(e.g., Electron safeStorage or OS keychain) for persisting keys, add a UI entry
point for users to add/update the key, describe implementing
getStoredApiKey("wisprflow") and a companion setStoredApiKey("wisprflow") in the
main process with encryption and access controls, require client-side validation
on save (call WisprFlow auth/validate endpoint) and surface clear error messages
when transcribeAudio calls fail due to missing/invalid keys (include guidance to
prompt the user to configure the key).
- Around line 141-159: The observable's async run() has no cancellation and can
continue work after unsubscribe; create an AbortController at observable start,
pass controller.signal into transcribeAudio(...) and into query(...) (or into
any client call that supports an abort/signal), and return an unsubscribe
function that calls controller.abort(); in run(), catch abort errors (e.g.,
DOMException/AbortError) and suppress emitting emit.error for those cases so
cancellation doesn't surface as an error; reference observable, run,
transcribeAudio, query, emit, VOICE_AGENT_PROMPT, and workspaceTools when making
these changes.
- Around line 164-196: The current workspace tools (createWorktree,
listWorktrees, getCurrentContext) use a custom structure but must be migrated to
use the SDK helper tool() and return Promise<CallToolResult>; replace each
object with a call to tool(name, description, inputSchema, handler) (referencing
the existing identifiers createWorktree, listWorktrees, getCurrentContext to
find them), change the handler signature to async (input, ctx) =>
Promise<CallToolResult>, invoke your existing helpers (createWorkspace,
getAllWorkspaces, getActiveWorkspace) inside the handler, and return a
CallToolResult object with content as an array of { type: 'text' | 'resource' |
'image', textOrResource } and optional isError flag instead of plain objects;
ensure input schemas use z.object(...) as before when passed into tool().
♻️ Duplicate comments (21)
apps/mobile/screens/(authenticated)/index/HomeScreen.tsx (1)
19-28: Use TRPC RouterOutputs type instead of manually duplicating the OnlineDevice interface.This local interface may drift from the server schema (e.g., missing
createdAt). Import the canonical type:import type { RouterOutputs } from "@superset/trpc"; type OnlineDevice = RouterOutputs["device"]["listOnlineDevices"][number];apps/mobile/package.json (1)
35-35: Use tilde versioning for Expo SDK compatibility. As previously flagged, Expo SDK 54 documentation specifies expo-crypto@~15.0.8. The caret (^) allows flexibility that may cause incompatibilities with Expo's bundled native modules.

- "expo-crypto": "^15.0.8",
+ "expo-crypto": "~15.0.8",

apps/desktop/src/renderer/routes/_authenticated/components/AgentHooks/hooks/useDevicePresence/useDevicePresence.ts (1)
22-24: Log heartbeat failures instead of silently ignoring them. As previously noted, suppressing errors entirely hides connectivity and auth issues. Per coding guidelines, errors should be logged with context at minimum.

- } catch {
-   // Heartbeat can fail when offline - ignore
- }
+ } catch (error) {
+   console.warn("[useDevicePresence/heartbeat] failed", error);
+ }
64-74: Wrap API key validation to prevent unhandled exceptions.
validateApiKey performs database queries that can fail. If an exception occurs (connection error, query failure, etc.), it will propagate uncaught and result in a 500 error with no context. Add error handling with logging.

Proposed fix

export async function authenticateMcpRequest(
  request: Request,
): Promise<McpContext | null> {
  const apiKey = request.headers.get("X-API-Key");
  if (!apiKey) {
    return null;
  }
- return validateApiKey(apiKey);
+ try {
+   return await validateApiKey(apiKey);
+ } catch (error) {
+   console.error("[mcp/auth] API key validation failed:", error);
+   return null;
+ }
}
26-31: Guard against potentially nullable branch during name matching. Calling ws.branch.toLowerCase() will throw a TypeError if branch is unexpectedly null or undefined at runtime. Use optional chaining for defensive coding.

Proposed fix

  const searchName = params.workspaceName.toLowerCase();
  const found = workspaces.find(
    (ws) =>
      ws.name.toLowerCase() === searchName ||
-     ws.branch.toLowerCase() === searchName,
+     ws.branch?.toLowerCase() === searchName,
  );
26-31: Guard against potentially nullable branch during name matching. Same issue as in navigate-to-workspace.ts: calling ws.branch.toLowerCase() will throw if branch is null or undefined.

Proposed fix

  const searchName = params.workspaceName.toLowerCase();
  const found = workspaces.find(
    (ws) =>
      ws.name.toLowerCase() === searchName ||
-     ws.branch.toLowerCase() === searchName,
+     ws.branch?.toLowerCase() === searchName,
  );
43-48: Add authorization guard before updating commands. Any authenticated org member can currently update any command in that organization. Add a check to ensure only the command creator (or the owning device/user) can update it.

Proposed fix

  if (!existingCommand) {
    throw new TRPCError({
      code: "NOT_FOUND",
      message: "Command not found",
    });
  }
+
+ if (existingCommand.userId !== ctx.session.user.id) {
+   throw new TRPCError({
+     code: "FORBIDDEN",
+     message: "Not authorized to update this command",
+   });
+ }
277-284: Include organizationId in the device_presence unique index. Current uniqueness is global per user/device, so the same device can’t appear in multiple orgs. This breaks multi‑org presence tracking.

🐛 Proposed fix

- uniqueIndex("device_presence_user_device_idx").on(
-   table.userId,
-   table.deviceId,
- ),
+ uniqueIndex("device_presence_user_org_device_idx").on(
+   table.userId,
+   table.organizationId,
+   table.deviceId,
+ ),
303-308: Constrain targetDeviceType and add FK for parentCommandId. Using text for device type risks drift, and parent/child commands can become orphaned without an FK.

🛠️ Proposed fix

- targetDeviceType: text("target_device_type"),
+ targetDeviceType: deviceType("target_device_type"),
  tool: text().notNull(),
  params: jsonb().$type<Record<string, unknown>>(),
- parentCommandId: uuid("parent_command_id"),
+ parentCommandId: uuid("parent_command_id").references(
+   () => agentCommands.id,
+   { onDelete: "set null" },
+ ),
349-349: Store usageCount as an integer. Text counters break numeric ordering and arithmetic.

💡 Proposed fix

- usageCount: text("usage_count").notNull().default("0"),
+ usageCount: integer("usage_count").notNull().default(0),
216-231: Avoid fabricated txids and BigInt→Number coercion.Electric sync expects the DB transaction id;
Date.now()andNumber(...)can break ordering/collide. Use the server’s txid consistently.🛠️ Suggested direction (client-side)
- if (!changes.status) { - return { txid: Date.now() }; - } const result = await apiClient.agent.updateCommand.mutate({ id: original.id, - status: changes.status, + status: changes.status ?? original.status, claimedBy: changes.claimedBy ?? undefined, claimedAt: changes.claimedAt ?? undefined, result: changes.result ?? undefined, error: changes.error ?? undefined, executedAt: changes.executedAt ?? undefined, }); - return { txid: Number(result.txid) }; + return { txid: result.txid };This assumes the server returns a real txid even for no-op updates; please confirm/update server behavior accordingly.
packages/trpc/src/router/device/device.ts (1)
46-53: Align upsert conflict target with org-scoped uniqueness. If device presence is unique per (user, organization, device), the upsert target should include organizationId to prevent cross-org overwrites.

🐛 Proposed fix

- .onConflictDoUpdate({
-   target: [devicePresence.userId, devicePresence.deviceId],
+ .onConflictDoUpdate({
+   target: [
+     devicePresence.userId,
+     devicePresence.organizationId,
+     devicePresence.deviceId,
+   ],
    set: {
      deviceName: input.deviceName,
      deviceType: input.deviceType,
      lastSeenAt: now,
      organizationId,
    },
  })
134-205: Scope revoke/update by active organization. These mutations only filter by userId, allowing cross‑org key updates for multi‑org users. Add organizationId checks like generate/list.

🛠️ Proposed fix

revoke: protectedProcedure
  .input(z.object({ id: z.string().uuid() }))
  .mutation(async ({ ctx, input }) => {
+   const organizationId = ctx.session.session.activeOrganizationId;
+   if (!organizationId) {
+     throw new TRPCError({
+       code: "BAD_REQUEST",
+       message: "No active organization selected",
+     });
+   }
    const userId = ctx.session.user.id;
    const [revoked] = await db
      .update(apiKeys)
      .set({ revokedAt: new Date() })
      .where(
        and(
          eq(apiKeys.id, input.id),
          eq(apiKeys.userId, userId),
+         eq(apiKeys.organizationId, organizationId),
          isNull(apiKeys.revokedAt),
        ),
      )
      .returning({ id: apiKeys.id });

update: protectedProcedure
  .input(
    z.object({
      id: z.string().uuid(),
      name: z.string().min(1).max(100).optional(),
      defaultDeviceId: z.string().nullable().optional(),
    }),
  )
  .mutation(async ({ ctx, input }) => {
+   const organizationId = ctx.session.session.activeOrganizationId;
+   if (!organizationId) {
+     throw new TRPCError({
+       code: "BAD_REQUEST",
+       message: "No active organization selected",
+     });
+   }
    const userId = ctx.session.user.id;
    ...
    const [updated] = await db
      .update(apiKeys)
      .set(updates)
      .where(
        and(
          eq(apiKeys.id, input.id),
          eq(apiKeys.userId, userId),
+         eq(apiKeys.organizationId, organizationId),
          isNull(apiKeys.revokedAt),
        ),
      )
      .returning();
11-16: Strip query string before extracting workspace ID. If the hash contains a query string (e.g., #/workspace/abc123?tab=settings), the regex captures the query string as part of the ID, causing workspace lookups to fail.

💡 Suggested fix

  const hash = window.location.hash;
  const pathname = hash.startsWith("#") ? hash.slice(1) : hash;
+ const pathOnly = pathname.split("?")[0];
  // Parse workspace ID from route if present (route is /workspace/$workspaceId)
- const workspaceMatch = pathname.match(/\/workspace\/([^/]+)/);
+ const workspaceMatch = pathOnly.match(/\/workspace\/([^/]+)/);

apps/api/src/lib/mcp/tools.ts (5)
22-22: Refactor registerMcpTools to accept a params object. Per coding guidelines, functions with 2+ parameters should accept a single object parameter for self-documentation and extensibility.

♻️ Suggested fix

- export function registerMcpTools(server: McpServer, ctx: McpContext) {
+ export function registerMcpTools({
+   server,
+   ctx,
+ }: { server: McpServer; ctx: McpContext }) {

Update call site in apps/api/src/app/api/mcp/route.ts:

- registerMcpTools(server, ctx);
+ registerMcpTools({ server, ctx });
92-118: Slug generation can yield empty or globally-duplicate slugs. If the title slugifies to empty (e.g., "!!!"), an empty slug is inserted. Also, the uniqueness check is org-scoped but the DB has a global unique constraint, which can still cause conflicts.

🐛 Suggested fix

- const slug = params.title
+ const baseSlug = params.title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "")
    .slice(0, 50);
+ const slug = baseSlug.length > 0 ? baseSlug : "task";

  // Check for existing slug and make unique
  const existingTasks = await db
    .select({ slug: tasks.slug })
    .from(tasks)
-   .where(
-     and(
-       eq(tasks.organizationId, ctx.organizationId),
-       ilike(tasks.slug, `${slug}%`),
-     ),
-   );
+   .where(ilike(tasks.slug, `${slug}%`));
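The slug pipeline above can be exercised in isolation. A minimal sketch, assuming the same regex steps; the `slugify` helper name and the `"task"` fallback literal are illustrative, not from the PR:

```typescript
// Illustrative stand-alone version of the suggested slug logic:
// lowercase, collapse non-alphanumerics to "-", trim leading/trailing
// dashes, cap at 50 chars, and fall back when nothing survives.
function slugify(title: string, fallback = "task"): string {
  const base = title
    .toLowerCase()
    .replace(/[^a-z0-9]+/g, "-")
    .replace(/^-|-$/g, "")
    .slice(0, 50);
  return base.length > 0 ? base : fallback;
}
```

Uniqueness still needs the database check; this only guarantees a non-empty base slug.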
213-229: Add validation to reject empty updates. All update fields are optional, allowing a request with only taskId to reach .set({}), which Drizzle rejects with a database error. Return a validation error when no fields are provided.

🐛 Suggested fix

  if (params.estimate !== undefined) updateData.estimate = params.estimate;

+ if (Object.keys(updateData).length === 0) {
+   return {
+     content: [{ type: "text", text: "Error: No fields provided to update" }],
+     isError: true,
+   };
+ }
+
  const result = await dbWs.transaction(async (tx) => {
858-867: Forward taskId to the device command.
taskId is accepted by the tool schema but dropped before dispatch, so the desktop side can't associate the workspace with a task.

🐛 Suggested fix

  return executeOnDevice({
    ctx,
    deviceId: targetDeviceId,
    tool: "create_workspace",
    params: {
      name: params.name,
      branchName: params.branchName,
      baseBranch: params.baseBranch,
+     taskId: params.taskId,
    },
  });
1070-1074: Avoid overwriting completed/failed commands when timing out. A device can complete between the last poll and the timeout update. Guard the update to only apply to "in-flight" statuses.

🐛 Suggested fix

  // Mark as timeout
  await db
    .update(agentCommands)
    .set({ status: "timeout" })
-   .where(eq(agentCommands.id, cmd.id));
+   .where(
+     and(
+       eq(agentCommands.id, cmd.id),
+       or(
+         eq(agentCommands.status, "pending"),
+         eq(agentCommands.status, "claimed"),
+       ),
+     ),
+   );
32-46: Guard against unresolvedprojectIdbefore calling the mutation.After the fallback logic,
projectIdcould still benullif the workspace'sprojectIdis null. Add a guard to fail fast with a clear error.💡 Suggested fix
// Fall back to the most recently used workspace's project if (!projectId) { const sorted = [...workspaces].sort( (a, b) => (b.lastOpenedAt ?? 0) - (a.lastOpenedAt ?? 0), ); projectId = sorted[0].projectId; } + + if (!projectId) { + return { success: false, error: "No project available for workspace creation" }; + } try { const result = await ctx.createWorktree.mutateAsync({apps/desktop/src/renderer/routes/_authenticated/components/AgentHooks/hooks/useCommandWatcher/tools/index.ts (1)
40-47: Replaceparsed.error.errorswithparsed.error.issuesfor Zod 4 compatibility.The repository uses Zod 4.3.5, which removed the
.errorsproperty. Line 45 will crash at runtime with "Cannot read property 'map' of undefined" when parameter validation fails.🐛 Fix
- error: `Invalid params: ${parsed.error.errors.map((e: { message: string }) => e.message).join(", ")}`, + error: `Invalid params: ${parsed.error.issues.map((e: { message: string }) => e.message).join(", ")}`,
🧹 Nitpick comments (11)
apps/mobile/screens/(authenticated)/index/HomeScreen.tsx (2)
49-53: Extract polling interval to a named constant and consider pausing when app is backgrounded. Per coding guidelines, magic numbers should be extracted. Additionally, on mobile, polling continues when the app is backgrounded, which wastes battery and network.

♻️ Suggested refactor

+ const DEVICE_POLL_INTERVAL_MS = 10_000;
+
  export function HomeScreen() {
    // ...
    useEffect(() => {
      fetchDevices();
-     const interval = setInterval(fetchDevices, 10_000);
+     const interval = setInterval(fetchDevices, DEVICE_POLL_INTERVAL_MS);
      return () => clearInterval(interval);
    }, [fetchDevices]);

For background handling, consider using React Native's AppState to pause polling when the app is not active:

import { AppState } from "react-native";

useEffect(() => {
  const subscription = AppState.addEventListener("change", (state) => {
    if (state === "active") fetchDevices();
  });
  return () => subscription.remove();
}, [fetchDevices]);
55-61: Consider moving formatLastSeen outside the component. This pure utility function has no dependencies on component state or props. Defining it at module scope avoids recreating it on every render and improves readability.

♻️ Suggested refactor

+ const formatLastSeen = (date: Date) => {
+   const seconds = Math.floor((Date.now() - new Date(date).getTime()) / 1000);
+   if (seconds < 60) return `${seconds}s ago`;
+   const minutes = Math.floor(seconds / 60);
+   if (minutes < 60) return `${minutes}m ago`;
+   return new Date(date).toLocaleTimeString();
+ };
+
  export function HomeScreen() {
    // ...
-   const formatLastSeen = (date: Date) => {
-     const seconds = Math.floor((Date.now() - new Date(date).getTime()) / 1000);
-     if (seconds < 60) return `${seconds}s ago`;
-     const minutes = Math.floor(seconds / 60);
-     if (minutes < 60) return `${minutes}m ago`;
-     return new Date(date).toLocaleTimeString();
-   };
260-297: Add concurrency control to the design. The plan doesn't address what happens if the user presses the push-to-talk hotkey multiple times rapidly or while a previous command is still processing. Without concurrency guards, this could lead to:
- Multiple simultaneous recordings
- Overlapping agent queries
- Race conditions in state updates
- Wasted API calls
Consider adding to the voice store:
// Prevent concurrent operations
if (isRecording || isProcessing) return;

And document the expected behavior in the plan (e.g., "ignore new recordings while processing" vs. "cancel current and start new").
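As a runnable illustration of that guard, here is a tiny state machine; the state names and methods are assumptions for the sketch, not the plan's actual store API:

```typescript
// Hypothetical voice store with a re-entrancy guard; hotkey presses
// while recording or processing are ignored rather than stacked.
type VoiceState = "idle" | "recording" | "processing";

class VoiceStore {
  private state: VoiceState = "idle";

  // Returns false (press ignored) when a command is already in flight.
  startRecording(): boolean {
    if (this.state !== "idle") return false;
    this.state = "recording";
    return true;
  }

  finishRecording(): void {
    if (this.state === "recording") this.state = "processing";
  }

  completeProcessing(): void {
    this.state = "idle";
  }

  get current(): VoiceState {
    return this.state;
  }
}
```

The "cancel current and start new" alternative would instead abort the in-flight work inside startRecording before transitioning.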
apps/desktop/src/renderer/routes/_authenticated/settings/devices/components/DevicesSettings/DevicesSettings.tsx (2)
31-49: Prevent overlapping poll requests. If a request takes longer than the interval, multiple in-flight calls can stack up and return out of order. Consider guarding against concurrent fetches.

🔧 Example guard to avoid concurrent polls

- import { useCallback, useEffect, useState } from "react";
+ import { useCallback, useEffect, useRef, useState } from "react";
@@
  export function DevicesSettings() {
    const [devices, setDevices] = useState<OnlineDevice[]>([]);
    const [loading, setLoading] = useState(true);
    const [error, setError] = useState<string | null>(null);
+   const isPollingRef = useRef(false);
@@
    const fetchDevices = useCallback(async () => {
+     if (isPollingRef.current) return;
+     isPollingRef.current = true;
      try {
        setLoading(true);
        setError(null);
        const result = await apiTrpcClient.device.listOnlineDevices.query();
        setDevices(result);
      } catch (err) {
        setError(err instanceof Error ? err.message : "Failed to fetch devices");
      } finally {
+       isPollingRef.current = false;
        setLoading(false);
      }
    }, []);
46-56: Extract magic numbers to named constants. This will improve readability and make future tuning safer. As per coding guidelines, please extract the refresh interval and time thresholds into module-level constants.

♻️ Example constants extraction

+ const REFRESH_INTERVAL_MS = 10_000;
+ const MS_PER_SECOND = 1000;
+ const SECONDS_PER_MINUTE = 60;
+ const MINUTES_PER_HOUR = 60;
@@
- const interval = setInterval(fetchDevices, 10_000);
+ const interval = setInterval(fetchDevices, REFRESH_INTERVAL_MS);
@@
- const seconds = Math.floor((Date.now() - new Date(date).getTime()) / 1000);
- if (seconds < 60) return `${seconds}s ago`;
- const minutes = Math.floor(seconds / 60);
- if (minutes < 60) return `${minutes}m ago`;
+ const seconds = Math.floor(
+   (Date.now() - new Date(date).getTime()) / MS_PER_SECOND,
+ );
+ if (seconds < SECONDS_PER_MINUTE) return `${seconds}s ago`;
+ const minutes = Math.floor(seconds / SECONDS_PER_MINUTE);
+ if (minutes < MINUTES_PER_HOUR) return `${minutes}m ago`;
48-52: Log errors in fire-and-forget operations instead of swallowing silently.
The .catch(() => {}) silently discards any errors from the lastUsedAt update. Per coding guidelines, errors should never be swallowed silently—at minimum log them with context.

Proposed fix

  // Update last used timestamp (fire and forget)
  db.update(apiKeys)
    .set({ lastUsedAt: new Date() })
    .where(eq(apiKeys.id, found.id))
-   .catch(() => {});
+   .catch((error) => {
+     console.error("[mcp/auth] Failed to update lastUsedAt:", error);
+   });
54-68: Consider using a lookup object for conditional field assignments. Per coding guidelines, prefer lookup objects/maps over repeated conditionals. This is a minor improvement for readability.

Proposed refactor

- if (input.claimedBy !== undefined) {
-   updateData.claimedBy = input.claimedBy;
- }
- if (input.claimedAt !== undefined) {
-   updateData.claimedAt = input.claimedAt;
- }
- if (input.result !== undefined) {
-   updateData.result = input.result;
- }
- if (input.error !== undefined) {
-   updateData.error = input.error;
- }
- if (input.executedAt !== undefined) {
-   updateData.executedAt = input.executedAt;
- }
+ const optionalFields = ['claimedBy', 'claimedAt', 'result', 'error', 'executedAt'] as const;
+ for (const field of optionalFields) {
+   if (input[field] !== undefined) {
+     updateData[field] = input[field];
+   }
+ }
16-19: Consider simplifying the type assertion. The double cast as unknown as Record<string, unknown>[] loses type safety. Since CommandResult.data is Record<string, unknown>, you can assign the array directly without the intermediate cast.

💡 Suggested simplification

  return {
    success: true,
-   data: { workspaces: workspaces as unknown as Record<string, unknown>[] },
+   data: { workspaces },
  };
15-15: Module-level processingCommands Set may cause issues with React Fast Refresh and concurrent features. Using a module-level Set outside of React's lifecycle can lead to stale state during Fast Refresh in development and potential issues with React's concurrent features. Consider using a useRef inside the hook instead.

♻️ Suggested refactor

- const processingCommands = new Set<string>();
-
  export function useCommandWatcher() {
+   const processingCommands = useRef(new Set<string>());
    const isEnabled = useFeatureFlagEnabled(FEATURE_FLAGS.AGENT_COMMANDS_ACCESS);

Then update usages to processingCommands.current.has(...), .add(...), .delete(...).
93-101: Extract magic delay value and add explanatory comment. The 100ms delays appear to be for sync propagation, but the purpose isn't documented. Consider extracting to a named constant with explanation.

💡 Suggested improvement

+ // Allow Electric SQL sync to propagate status changes before continuing
+ const SYNC_PROPAGATION_DELAY_MS = 100;

  // In processCommand:
- await new Promise((resolve) => setTimeout(resolve, 100));
+ await new Promise((resolve) => setTimeout(resolve, SYNC_PROPAGATION_DELAY_MS));
164-166: Consider sequential command processing or documenting parallel behavior.
processCommand is called without await, firing multiple commands in parallel. While the processingCommands guard prevents re-entry, parallel execution could cause resource contention. If parallel is intentional, consider using Promise.all with a concurrency limit; otherwise, process sequentially.
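If sequential processing is chosen, a minimal sketch of the runner; the Command shape and processCommand signature here are illustrative stand-ins for the hook's real types:

```typescript
// Illustrative sequential runner: awaiting each command serializes
// execution, so tools touching shared resources never run concurrently.
interface Command {
  id: string;
  tool: string;
}

async function processSequentially(
  commands: Command[],
  processCommand: (cmd: Command) => Promise<void>,
): Promise<string[]> {
  const processed: string[] = [];
  for (const cmd of commands) {
    await processCommand(cmd);
    processed.push(cmd.id);
  }
  return processed;
}
```

A bounded-parallel variant would chunk the array and await Promise.all per chunk instead.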
| "dependencies": { | ||
| "@electric-sql/client": "https://pkg.pr.new/@electric-sql/client@3724", | ||
| "@linear/sdk": "^68.1.0", | ||
| "@modelcontextprotocol/sdk": "^1.25.3", |
Verify the MCP SDK version is current and stable. The latest published version is 1.25.2, but you're specifying ^1.25.3. This version does not exist on npm yet. Update to the latest available version.
- "@modelcontextprotocol/sdk": "^1.25.3",
+ "@modelcontextprotocol/sdk": "^1.25.2",📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| "@modelcontextprotocol/sdk": "^1.25.3", | |
| "@modelcontextprotocol/sdk": "^1.25.2", |
🤖 Prompt for AI Agents
In `@apps/api/package.json` at line 16, The package.json dependency for
`@modelcontextprotocol/sdk` points to a non-existent version (^1.25.3); update the
dependency entry for "@modelcontextprotocol/sdk" to the latest published version
(e.g., "1.25.2" or "^1.25.2") in package.json and then run your package manager
(npm install or yarn install) to refresh lockfiles and ensure the project uses
the valid SDK release.
try {
  // Create a fresh server instance for this request (stateless)
  const server = createMcpServer(ctx);

  // Create transport (stateless mode - no session management)
  const transport = new WebStandardStreamableHTTPServerTransport();

  // Connect server to transport
  await server.connect(transport);

  // Handle the request and return the response
  const response = await transport.handleRequest(request);

  return response;
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# First, let's examine the route.ts file to understand the full context
cat -n apps/api/src/app/api/mcp/route.ts | head -150

Repository: superset-sh/superset
Length of output: 4342
🏁 Script executed:
#!/bin/bash
# Search for McpServer and transport cleanup patterns in the codebase
rg -n "(disconnect|close|cleanup)" apps/api/src --type ts --type tsx -C 2Repository: superset-sh/superset
Length of output: 91
🏁 Script executed:
#!/bin/bash
# Search for any existing resource cleanup patterns in API routes
rg -n "finally\s*{|\.close\(|\.disconnect\(|addEventListener.*cleanup" apps/api/src --type ts --type tsx | head -20Repository: superset-sh/superset
Length of output: 219
🏁 Script executed:
#!/bin/bash
# Check if there are other MCP-related files that might show cleanup patterns
fd -t f "mcp" apps/api/src --type f | head -20
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Search for disconnect/close patterns more simply
rg -n "disconnect|close" apps/api/src --type ts -C 2 | head -30Repository: superset-sh/superset
Length of output: 2319
🏁 Script executed:
#!/bin/bash
# Find all MCP-related files in the project
find apps/api -name "*mcp*" -type f 2>/dev/null
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Check if node_modules has the SDK to inspect types
ls -la node_modules/@modelcontextprotocol/sdk/server/ 2>/dev/null | head -20
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Search for any usage of server.connect or transport in the codebase
rg -n "\.connect\(|\.handleRequest\(" apps/ --type ts -C 3 | head -40
Length of output: 1132
🏁 Script executed:
#!/bin/bash
# Check package.json for SDK version and any related docs
cat package.json | grep -A 5 -B 5 "modelcontextprotocol"
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Search for any test files that might show proper resource cleanup
find . -name "*.test.ts" -o -name "*.spec.ts" | xargs grep -l "McpServer\|WebStandardStreamableHTTPServerTransport" 2>/dev/null | head -5
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Check if there's any documentation or examples directory
find apps/api -type d -name "examples" -o -name "docs" | head -5
Length of output: 46
🏁 Script executed:
#!/bin/bash
# Try to find package.json at root
find . -maxdepth 2 -name "package.json" -type f | head -3
Length of output: 78
🏁 Script executed:
#!/bin/bash
# If we find package.json, check the MCP SDK version
head -100 package.json | grep -E "modelcontextprotocol|version"
Length of output: 84
🌐 Web query:
@modelcontextprotocol/sdk McpServer disconnect close cleanup resource management
💡 Result:
In the TypeScript MCP SDK (@modelcontextprotocol/sdk), the lifecycle pattern is generally:
- Connect: await server.connect(transport) (e.g., StdioServerTransport, SSEServerTransport, StreamableHTTPServerTransport). [1]
- Close/cleanup: call server.close() when the underlying connection is gone (e.g., stdin closes for stdio). [1]
Practical cleanup patterns
1) Stdio servers (process / stdin lifecycle)
Hook the process/stdio lifecycle and close the server when the stream ends (example shown in the repo issue): [1]
process.stdin.on("close", () => {
server.close();
});

2) HTTP/SSE servers (per-connection resources)
Treat each HTTP/SSE connection as a resource you must explicitly clean up:
- remove the transport from your sessionId -> transport map on disconnect
There’s an open issue noting cases where the SSE transport’s close path may not be called (so res.end isn’t triggered), and suggesting close should be used for proper cleanup. [2]
3) Client-side “disconnect”
In practice, “disconnect” is handled by closing the connection and cleaning associated resources; many examples use a try/finally and call close() to ensure cleanup. [3]
If you tell me which transport you’re using (stdio, SSE, or Streamable HTTP) and whether you have one server per connection or a shared server with many transports, I can give an exact shutdown snippet for your architecture.
🌐 Web query:
WebStandardStreamableHTTPServerTransport MCP SDK close method HTTP server cleanup
💡 Result:
In the MCP TypeScript SDK, WebStandardStreamableHTTPServerTransport (and the Node wrapper StreamableHTTPServerTransport) does not own or create your HTTP server, so its close() cannot shut down your Express/Node/http server—you must close that yourself (e.g., keep the server = app.listen(...) handle and call server.close() on shutdown). (npmjs.com)
What close() does do is transport-level cleanup: it “closes the transport and all active connections” (i.e., tears down any active streaming/SSE connections and in-memory transport state). Also, start() is effectively a no-op for Streamable HTTP because connections are “managed per-request.” (app.unpkg.com)
Citations:
- 1: https://www.npmjs.com/package/%40modelcontextprotocol/sdk/v/1.10.2?utm_source=openai
- 2: https://app.unpkg.com/%40modelcontextprotocol/sdk%401.25.2/files/dist/esm/server/streamableHttp.d.ts
Add explicit server cleanup in finally block to prevent resource leaks.
The MCP server created per-request should be explicitly closed after handling. The @modelcontextprotocol/sdk documentation indicates that McpServer.close() performs necessary transport-level cleanup, including closing active connections and freeing in-memory state. Add a finally block to ensure cleanup happens regardless of success or error:
let server: ReturnType<typeof createMcpServer> | undefined;
try {
  server = createMcpServer(ctx);
  const transport = new WebStandardStreamableHTTPServerTransport();
  await server.connect(transport);
  return await transport.handleRequest(request);
} finally {
  await server?.close();
}

🤖 Prompt for AI Agents
In `@apps/api/src/app/api/mcp/route.ts` around lines 46 - 59, The per-request MCP
server created via createMcpServer(ctx) is not being closed, risking
transport-level resource leaks; wrap the connect/handleRequest logic in a
try/finally where you await server.close() in the finally to ensure cleanup even
on error, guarding for server being defined (e.g., if (server) await
server.close()) and optionally catching/logging errors from close so they don't
mask the original exception; reference createMcpServer,
WebStandardStreamableHTTPServerTransport, server.connect,
transport.handleRequest, and server.close.
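The try/finally discipline the prompt describes generalizes to any per-request resource; a hedged sketch, where the Closable interface and withResource helper are illustrative and not part of the MCP SDK:

```typescript
// Generic per-request cleanup: close() runs whether the handler
// returns normally or throws, and only if the resource was created.
interface Closable {
  close(): Promise<void>;
}

async function withResource<T extends Closable, R>(
  create: () => T,
  use: (resource: T) => Promise<R>,
): Promise<R> {
  let resource: T | undefined;
  try {
    resource = create();
    return await use(resource);
  } finally {
    await resource?.close();
  }
}
```

In the route, create would wrap createMcpServer(ctx) and use would connect the transport and handle the request.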
if (params.search) {
  const searchCondition = or(
    ilike(tasks.title, `%${params.search}%`),
    ilike(tasks.description, `%${params.search}%`),
  );
  if (searchCondition) {
    conditions.push(searchCondition);
  }
}
Escape SQL wildcards in search pattern.
User input in params.search could contain % or _ characters, causing unexpected pattern matching behavior. Consider escaping these characters.
💡 Suggested fix
if (params.search) {
+ const escapedSearch = params.search.replace(/[%_]/g, '\\$&');
const searchCondition = or(
- ilike(tasks.title, `%${params.search}%`),
- ilike(tasks.description, `%${params.search}%`),
+ ilike(tasks.title, `%${escapedSearch}%`),
+ ilike(tasks.description, `%${escapedSearch}%`),
);

📝 Committable suggestion
if (params.search) {
  const searchCondition = or(
    ilike(tasks.title, `%${params.search}%`),
    ilike(tasks.description, `%${params.search}%`),
  );
  if (searchCondition) {
    conditions.push(searchCondition);
  }
}

if (params.search) {
  const escapedSearch = params.search.replace(/[%_]/g, '\\$&');
  const searchCondition = or(
    ilike(tasks.title, `%${escapedSearch}%`),
    ilike(tasks.description, `%${escapedSearch}%`),
  );
  if (searchCondition) {
    conditions.push(searchCondition);
  }
}
🤖 Prompt for AI Agents
In `@apps/api/src/lib/mcp/tools.ts` around lines 299 - 307, The search pattern
built from params.search is vulnerable to unintended SQL wildcard matches;
before calling ilike(tasks.title, `%${params.search}%`) and
ilike(tasks.description, `%${params.search}%`) escape SQL wildcard characters
(%) and (_) and backslashes in params.search (e.g. replace \ with \\, % with \%
and _ with \_) then build the pattern as `%${escapedSearch}%`, and ensure the
query uses the same escape convention (or an ESCAPE '\\' clause) so
searchCondition and conditions receive the correctly escaped ilike patterns.
and(
  ...conditions,
  or(
    ilike(users.name, `%${params.search}%`),
    ilike(users.email, `%${params.search}%`),
  ),
Escape SQL wildcards in search pattern.
Same issue as in `list_tasks` - user-provided search input should have `%` and `_` escaped to prevent unexpected pattern matching.
🤖 Prompt for AI Agents
In `@apps/api/src/lib/mcp/tools.ts` around lines 559 - 564, The ilike pattern
currently interpolates raw params.search into the SQL (used in the
or(ilike(users.name, `%${params.search}%`), ilike(users.email,
`%${params.search}%`))) which allows unescaped % and _ to be treated as
wildcards; update the code to escape SQL wildcard characters in params.search
(escape '%' and '_' by prefixing with the SQL escape char) before wrapping with
surrounding '%' for contains-matching, and use that escapedPattern in both ilike
calls; reference the params.search variable, the ilike(...) calls on users.name
and users.email, and the or(...) expression so you replace the raw interpolation
with the escapedPattern.
```typescript
return observable((emit) => {
  const run = async () => {
    // 1. Transcribe
    const transcription = await transcribeAudio(input.audioBase64);
    emit.next({ type: "transcription", text: transcription });

    // 2. Run agent with custom tools
    for await (const message of query({
      prompt: transcription,
      options: {
        systemPrompt: VOICE_AGENT_PROMPT,
        tools: workspaceTools, // Custom tools defined below
      }
    })) {
      emit.next({ type: "agent", message });
    }
  };
  run().catch((err) => emit.error(err));
});
```
Add cancellation support to the observable subscription.
The async run() function lacks an abort mechanism. If the client unsubscribes mid-execution (e.g., user cancels or navigates away), the transcription and Claude agent query continue running in the background, potentially wasting API credits and resources.
♻️ Proposed fix with AbortController

```diff
 .subscription(({ input }) => {
   return observable((emit) => {
+    const abortController = new AbortController();
+
     const run = async () => {
+      if (abortController.signal.aborted) return;
+
       // 1. Transcribe
-      const transcription = await transcribeAudio(input.audioBase64);
+      const transcription = await transcribeAudio(input.audioBase64, abortController.signal);
       emit.next({ type: "transcription", text: transcription });
+      if (abortController.signal.aborted) return;
+
       // 2. Run agent with custom tools
       for await (const message of query({
         prompt: transcription,
         options: {
           systemPrompt: VOICE_AGENT_PROMPT,
           tools: workspaceTools,
+          signal: abortController.signal,
         }
       })) {
+        if (abortController.signal.aborted) break;
         emit.next({ type: "agent", message });
       }
     };
     run().catch((err) => emit.error(err));
+
+    // Cleanup on unsubscribe
+    return () => {
+      abortController.abort();
+    };
   });
 }),
```

🤖 Prompt for AI Agents
In `@apps/desktop/VOICE_AGENT_PLAN.md` around lines 141 - 159, The observable's
async run() has no cancellation and can continue work after unsubscribe; create
an AbortController at observable start, pass controller.signal into
transcribeAudio(...) and into query(...) (or into any client call that supports
an abort/signal), and return an unsubscribe function that calls
controller.abort(); in run(), catch abort errors (e.g., DOMException/AbortError)
and suppress emitting emit.error for those cases so cancellation doesn't surface
as an error; reference observable, run, transcribeAudio, query, emit,
VOICE_AGENT_PROMPT, and workspaceTools when making these changes.
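The teardown-aborts-work pattern the review asks for can be exercised in isolation. Below is a minimal sketch: `observable`, `Emit`, and `makeSubscription` are simplified stand-ins for this example, not the tRPC API, and the timed steps stand in for the transcription/agent calls:

```typescript
type Emit<T> = { next: (v: T) => void; error: (e: unknown) => void };

// Simplified observable: subscribing runs setup, which returns a teardown fn.
function observable<T>(setup: (emit: Emit<T>) => () => void) {
  return {
    subscribe(emit: Emit<T>) {
      const teardown = setup(emit);
      return { unsubscribe: () => teardown() };
    },
  };
}

// A subscription whose async work stops once the client unsubscribes.
function makeSubscription(steps: string[], stepMs: number) {
  return observable<string>((emit) => {
    const controller = new AbortController();
    const run = async () => {
      for (const step of steps) {
        await new Promise((r) => setTimeout(r, stepMs));
        if (controller.signal.aborted) return; // stop silently after cancel
        emit.next(step);
      }
    };
    run().catch((err) => {
      // Suppress errors caused by cancellation itself.
      if (!controller.signal.aborted) emit.error(err);
    });
    return () => controller.abort(); // teardown aborts in-flight work
  });
}
```

Unsubscribing after the first emission prevents the remaining steps from ever reaching `emit.next`, which is exactly the behavior the proposed fix adds to the voice-agent subscription.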
```typescript
const workspaceTools = {
  createWorktree: {
    description: "Create a new worktree/workspace for a feature or bug fix",
    parameters: z.object({
      name: z.string().describe("Human-readable name for the workspace"),
      branch: z.string().optional().describe("Git branch name (auto-generated if omitted)"),
      baseBranch: z.string().optional().describe("Branch to base off of (default: main)"),
    }),
    execute: async ({ name, branch, baseBranch }) => {
      // Call existing workspace creation logic
      const result = await createWorkspace({ name, branch, baseBranch });
      return { success: true, workspaceId: result.id, branch: result.branch };
    },
  },

  listWorktrees: {
    description: "List all current worktrees/workspaces",
    parameters: z.object({}),
    execute: async () => {
      const workspaces = await getAllWorkspaces();
      return workspaces.map(w => ({ id: w.id, name: w.name, branch: w.branch }));
    },
  },

  getCurrentContext: {
    description: "Get info about the currently active workspace",
    parameters: z.object({}),
    execute: async () => {
      const current = await getActiveWorkspace();
      return current ? { id: current.id, name: current.name, branch: current.branch } : null;
    },
  },
};
```
🧩 Analysis chain
🌐 Web query:
@anthropic-ai/claude-agent-sdk tool definition format documentation
💡 Result:
The Claude Agent SDK's tool definition format (for defining MCP tools) is documented in the Agent SDK reference – TypeScript page:

- `tool()` helper (recommended format): you define a tool as `tool(name, description, inputSchema, handler) -> SdkMcpToolDefinition`, where `inputSchema` is a Zod object shape and `handler` returns a `CallToolResult`. [1][2]
- Registering tools: pass an array of these tool definitions to `createSdkMcpServer({ tools: [...] })`. [1][2]
- If you meant the schemas for the SDK's built-in tools (Read/Edit/Bash/WebSearch/etc.), see the same page's "Tool Input Types" section, which lists the exported TypeScript interfaces/unions. [1][2]

Sources: Agent SDK reference – TypeScript (Claude Docs / API Docs). [1][2]
🌐 Web query:
@anthropic-ai/claude-agent-sdk tool() function signature CallToolResult type
💡 Result:
`tool()` (TypeScript) signature:

```typescript
function tool<Schema extends ZodRawShape>(
  name: string,
  description: string,
  inputSchema: Schema,
  handler: (
    args: z.infer<ZodObject<Schema>>,
    extra: unknown
  ) => Promise<CallToolResult>
): SdkMcpToolDefinition<Schema>
```
[1]

`CallToolResult` type:

```typescript
type CallToolResult = {
  content: Array<{
    type: 'text' | 'image' | 'resource';
    // Additional fields vary by type
  }>;
  isError?: boolean;
}
```
[1]

Source: Anthropic Claude Agent SDK TypeScript reference docs. [1]
References:
[1] https://platform.claude.com/docs/en/agent-sdk/typescript
Rewrite tool definitions to use the official @anthropic-ai/claude-agent-sdk `tool()` helper and `CallToolResult` format.
The current custom tool structure with `description`, `parameters`, and `execute` fields is incompatible with the SDK API. The official format requires:

- Using the `tool(name, description, inputSchema, handler)` function
- Returning `Promise<CallToolResult>` from handlers, not plain objects: `{ content: Array<{ type: 'text' | 'image' | 'resource' }>, isError?: boolean }`

Update all three tools (createWorktree, listWorktrees, getCurrentContext) to use the correct SDK API.
🤖 Prompt for AI Agents
In `@apps/desktop/VOICE_AGENT_PLAN.md` around lines 164 - 196, The current
workspace tools (createWorktree, listWorktrees, getCurrentContext) use a custom
structure but must be migrated to use the SDK helper tool() and return
Promise<CallToolResult>; replace each object with a call to tool(name,
description, inputSchema, handler) (referencing the existing identifiers
createWorktree, listWorktrees, getCurrentContext to find them), change the
handler signature to async (input, ctx) => Promise<CallToolResult>, invoke your
existing helpers (createWorkspace, getAllWorkspaces, getActiveWorkspace) inside
the handler, and return a CallToolResult object with content as an array of {
type: 'text' | 'resource' | 'image', textOrResource } and optional isError flag
instead of plain objects; ensure input schemas use z.object(...) as before when
passed into tool().
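The target shape can be illustrated without pulling in the SDK. In the sketch below, `tool` and `CallToolResult` are local stand-ins that mirror the documented signature — in the actual plan they would be imported from `@anthropic-ai/claude-agent-sdk`, and `getAllWorkspaces` is a stub for the app's existing helper:

```typescript
// Local stand-in mirroring the documented SDK CallToolResult shape.
type CallToolResult = {
  content: Array<{ type: "text"; text: string }>;
  isError?: boolean;
};

// Local stand-in mirroring the documented tool() helper signature.
// In the real plan: import { tool } from "@anthropic-ai/claude-agent-sdk".
function tool<Args>(
  name: string,
  description: string,
  inputSchema: unknown, // a Zod object shape in the real SDK
  handler: (args: Args) => Promise<CallToolResult>,
) {
  return { name, description, inputSchema, handler };
}

// Stub for the app's existing workspace helper.
async function getAllWorkspaces() {
  return [{ id: "ws-1", name: "feature-x", branch: "feat/x" }];
}

// listWorktrees rewritten: the handler returns CallToolResult content
// instead of a plain array of workspace objects.
const listWorktrees = tool<Record<string, never>>(
  "listWorktrees",
  "List all current worktrees/workspaces",
  {}, // z.object({}) in the real SDK
  async () => {
    const workspaces = await getAllWorkspaces();
    return {
      content: [{ type: "text", text: JSON.stringify(workspaces) }],
    };
  },
);
```

`createWorktree` and `getCurrentContext` would follow the same pattern, serializing their results into `content` entries and setting `isError: true` on failure.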
### 5. WisprFlow Client (main process)

**New:** `apps/desktop/src/main/lib/wisprflow.ts`

```typescript
const WISPRFLOW_API = "https://api.wisprflow.ai/v1/transcribe";

export async function transcribeAudio(audioBase64: string): Promise<string> {
  const apiKey = await getStoredApiKey("wisprflow");

  const response = await fetch(WISPRFLOW_API, {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ audio: audioBase64, format: "wav" }),
  });

  const result = await response.json();
  return result.text;
}
```
🛠️ Refactor suggestion | 🟠 Major
Document API key management and storage strategy.
The plan references getStoredApiKey("wisprflow") but doesn't describe how WisprFlow API keys are securely stored, how users configure them, or where this functionality should be implemented. This is a critical security consideration that should be addressed in the plan.
Consider adding a section covering:
- Secure storage mechanism (e.g., Electron's safeStorage API)
- UI for API key configuration
- Key validation on save
- Error messaging when key is missing or invalid
🤖 Prompt for AI Agents
In `@apps/desktop/VOICE_AGENT_PLAN.md` around lines 203 - 225, Update the plan to
document where and how WisprFlow API keys are stored and surfaced to the user:
specify using a secure mechanism (e.g., Electron safeStorage or OS keychain) for
persisting keys, add a UI entry point for users to add/update the key, describe
implementing getStoredApiKey("wisprflow") and a companion
setStoredApiKey("wisprflow") in the main process with encryption and access
controls, require client-side validation on save (call WisprFlow auth/validate
endpoint) and surface clear error messages when transcribeAudio calls fail due
to missing/invalid keys (include guidance to prompt the user to configure the
key).
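One way the suggested storage layer could look, as a sketch under stated assumptions: the cipher is injected so the code runs outside Electron — in the main process you would pass `safeStorage.encryptString` / `safeStorage.decryptString` (after checking `safeStorage.isEncryptionAvailable()`) and derive `dir` from `app.getPath("userData")`. `createKeyStore` is a hypothetical helper; only `getStoredApiKey`/`setStoredApiKey` names come from the plan:

```typescript
import { promises as fs } from "node:fs";
import path from "node:path";

// Injected crypto so the sketch runs anywhere. In Electron's main process,
// pass safeStorage.encryptString / safeStorage.decryptString here.
interface KeyCipher {
  encrypt(plain: string): Buffer;
  decrypt(blob: Buffer): string;
}

function createKeyStore(dir: string, cipher: KeyCipher) {
  const fileFor = (service: string) => path.join(dir, `${service}.key`);

  return {
    // Validate and persist an encrypted key; callers should also hit the
    // provider's auth endpoint to verify the key before saving.
    async setStoredApiKey(service: string, key: string): Promise<void> {
      if (!key.trim()) throw new Error(`Empty API key for ${service}`);
      await fs.mkdir(dir, { recursive: true });
      await fs.writeFile(fileFor(service), cipher.encrypt(key));
    },
    // null signals "not configured" so the UI can prompt the user,
    // rather than letting transcribeAudio fail with an opaque 401.
    async getStoredApiKey(service: string): Promise<string | null> {
      try {
        return cipher.decrypt(await fs.readFile(fileFor(service)));
      } catch {
        return null;
      }
    },
  };
}
```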
Summary

Add cloud MCP server that routes commands to desktop app via Electric SQL sync. This enables AI agents (like Claude Code) to control the Superset desktop app.

API Changes

- `/api/mcp` endpoint with HTTP transport

Desktop Changes

- `useCommandWatcher` hook to execute commands from cloud
- `useDevicePresence` hook for heartbeat/online status

Tools Available

Cloud-executed (immediate):

- `create_task`, `update_task`, `list_tasks`, `get_task`, `delete_task`
- `list_members`, `list_task_statuses`, `list_devices`

Device-routed (via Electric SQL):

- `list_workspaces`, `list_projects`, `get_app_context`
- `create_workspace`, `switch_workspace`, `delete_workspace`
- `navigate_to_workspace`

Test plan

- `.mcp.json` with API key from settings

Summary by CodeRabbit
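The device-routed tools imply the command lifecycle shown in the PR's sequence diagram (pending → claimed → executing → completed). A tiny sketch of a transition guard for `agent_commands` status updates — the status names come from the PR, but `canTransition` itself is a hypothetical helper, not code from this change:

```typescript
type CommandStatus = "pending" | "claimed" | "executing" | "completed";

// Allowed forward transitions for an agent_commands row, per the PR's
// sequence diagram: pending -> claimed -> executing -> completed.
const TRANSITIONS: Record<CommandStatus, CommandStatus[]> = {
  pending: ["claimed"],
  claimed: ["executing"],
  executing: ["completed"],
  completed: [],
};

function canTransition(from: CommandStatus, to: CommandStatus): boolean {
  return TRANSITIONS[from].includes(to);
}
```

Guarding the `useCommandWatcher` status writes this way would prevent a stale watcher from, say, re-claiming a command another device already completed.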
New Features
Infrastructure