
Add low hanging ui features#1304

Merged
Kitenite merged 2 commits into main from kitenite/low-hanging-ui on Feb 8, 2026

Conversation

@Kitenite Kitenite (Collaborator) commented Feb 8, 2026

Description

Related Issues

Type of Change

  • Bug fix
  • New feature
  • Documentation
  • Refactor
  • Other (please describe):

Testing

Screenshots (if applicable)

Additional Notes

Summary by CodeRabbit

  • New Features

    • Added a Thinking toggle to enable/disable AI "thinking" per session.
    • Allow persisting session preferences (thinking max tokens and selected model).
    • Model selection changes are saved for the current session.
    • Live session statistics now show real-time token usage and context info.
  • Style

    • Minor formatting improvements across UI components.

@coderabbitai coderabbitai Bot (Contributor) commented Feb 8, 2026

📝 Walkthrough

A new ThinkingToggle component was added and wired into the chat UI, which now persists the toggle and model selection via a new updateSessionConfig tRPC mutation. ChatSessionManager.updateAgentConfig was added to send an updated agent registration (including optional maxThinkingTokens and model) to the proxy. ContextIndicator now reads live session stats.

Changes

Cohort / File(s) — Summary

  • AI Chat Router (apps/desktop/src/lib/trpc/routers/ai-chat/index.ts): Added an updateSessionConfig mutation that delegates to chatSessionManager.updateAgentConfig and returns { success: true }.
  • Session Manager (apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/session-manager.ts): Added ChatSessionManager.updateAgentConfig({ sessionId, maxThinkingTokens?, model? }), which rebuilds the agent registration and POSTs it to /v1/sessions/{sessionId}/agents; removes fields when null and throws on non-OK responses.
  • Chat UI / Thinking Toggle (apps/desktop/src/renderer/.../ChatInterface/ChatInterface.tsx, packages/ui/src/components/ai-elements/thinking-toggle.tsx): Introduced the ThinkingToggle component and integrated it into ChatInterface; local state updates invoke the updateSessionConfig mutation to persist maxThinkingTokens and model selection.
  • Context Indicator (apps/desktop/src/renderer/.../ContextIndicator/ContextIndicator.tsx): Refactored to accept collections and modelId props and to use useLiveQuery to derive token usage and limits from live session stats (replacing static mock data).
  • Agent Schema / Streams (apps/streams/src/claude-agent.ts): Added optional maxThinkingTokens and model fields to agent request handling; includes maxThinkingTokens in request options when provided and adds model fallback logic.
  • UI formatting tidy-ups (packages/ui/src/components/ai-elements/bash-tool.tsx, tool-call.tsx, tool-interrupted.tsx): Minor className formatting refactors (condensed cn() calls) with no behavioral changes.

Sequence Diagram(s)

sequenceDiagram
    participant User as User
    participant ChatUI as ChatInterface
    participant Router as tRPC Router
    participant SessionMgr as ChatSessionManager
    participant Proxy as Agent Proxy

    User->>ChatUI: toggle thinking / change model
    ChatUI->>ChatUI: update local state
    ChatUI->>Router: call updateSessionConfig(sessionId, maxThinkingTokens, model)
    Router->>SessionMgr: updateAgentConfig(sessionId, maxThinkingTokens, model)
    SessionMgr->>Proxy: POST /v1/sessions/{sessionId}/agents with updated registration
    Proxy-->>SessionMgr: response
    SessionMgr-->>Router: success / throws on error
    Router-->>ChatUI: mutation resolved
    ChatUI->>ChatUI: reflect persisted state

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Poem

🐰 A toggle blinked bright, a thinker set free,
I nudged the session, sent tokens to be,
Model chosen, config sailed off on its quest,
Proxy returned calm — the agent at rest,
Hooray for small hops that make chats the best! ✨

🚥 Pre-merge checks | ❌ 3
❌ Failed checks (2 warnings, 1 inconclusive)
  • Description check — ⚠️ Warning. The PR description is entirely empty, containing only the template structure with no substantive content in any of the required sections. Resolution: provide a detailed description of the changes, list related issues, specify the type of change, describe testing performed, and include any additional context about the implementation.
  • Docstring coverage — ⚠️ Warning. Docstring coverage is 0.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.
  • Title check — ❓ Inconclusive. The title "Add low hanging ui features" is vague and generic and doesn't convey which specific UI features are being added. Resolution: use a more specific title, such as "Add thinking toggle and model switcher UI controls".


@coderabbitai coderabbitai Bot left a comment
Actionable comments posted: 4

🤖 Fix all issues with AI agents
In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/session-manager.ts`:
- Around line 312-358: The updateAgentConfig method currently performs fetch
without try/catch; wrap the network call and response check in a try/catch
inside updateAgentConfig to mirror startSession/restoreSession: use
buildProxyHeaders and fetch as before, on any thrown error or non-ok response
call this.emit('error', { sessionId, action: 'updateAgentConfig', error }) and
log a contextual message (including sessionId and maxThinkingTokens), then
rethrow a new Error with contextual details (or rethrow the caught error) so
callers get consistent, informative errors; keep existing registration logic
(this.provider.getAgentRegistration) and response handling (res.ok check) but
move them inside the try block.
- Around line 327-330: updateAgentConfig rebuilds the agent registration without
paneId/tabId, causing routing context loss; add paneId and tabId fields to the
ActiveSession interface and ensure they are populated where sessions are created
(e.g., startSession and restoreSession) so updateAgentConfig can pass
session.paneId and session.tabId into getAgentRegistration when reconstructing
the registration; update references in updateAgentConfig, startSession,
restoreSession, and any session creation logic to set and read these new
ActiveSession properties.

In
`@apps/desktop/src/renderer/screens/main/components/WorkspaceView/ContentView/TabsContent/TabView/ChatPane/ChatInterface/ChatInterface.tsx`:
- Around line 52-54: The thinkingEnabled local state is never initialized from
the server-side session/agent config so the toggle can be out of sync; update
ChatInterface to derive initial value from the session/agent config (check
session?.config?.maxThinkingTokens or agent?.config?.maxThinkingTokens) and call
setThinkingEnabled(Boolean(maxThinkingTokens)) in a useEffect that runs when the
session/agent changes, and ensure the toggle handler uses updateConfig.mutate to
set maxThinkingTokens to a default positive number when enabling and to 0 when
disabling so UI and backend stay synchronized.
- Around line 222-231: The handler handleThinkingToggle uses a hardcoded magic
number 10000 and calls updateConfig.mutate without error handling; extract the
magic number into a named constant (e.g., DEFAULT_MAX_THINKING_TOKENS in
constants.ts or at module top) and update the call to use that constant, and add
error handling for the mutation (use updateConfig.mutate or its equivalent with
an onError callback) to roll back or reset setThinkingEnabled when the mutation
fails and at minimum log the error so local state doesn't remain out of sync
with the server.
🧹 Nitpick comments (4)
apps/streams/src/claude-agent.ts (1)

35-35: Consider adding bounds validation for maxThinkingTokens.

The schema accepts any number (including zero, negative, or extremely large values). The Claude SDK likely expects a positive integer within a reasonable range. Adding .int().positive() or .min(1024).max(128000) would catch invalid values early at the API boundary rather than letting them propagate to the SDK.

As per coding guidelines, "Validate external API data as untrusted by handling missing fields, unknown enums, and unexpected shapes with tolerant parsing and explicit fallbacks."

Proposed schema tightening
-	maxThinkingTokens: z.number().optional(),
+	maxThinkingTokens: z.number().int().positive().optional(),
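The same bounds check can also be expressed without zod. A minimal, library-agnostic sketch — the 1024–128000 range mirrors the suggestion above and is an assumed, not documented, SDK limit:

```typescript
// Hypothetical helper mirroring z.number().int().positive() plus the
// suggested .min(1024).max(128000) range. The bounds are assumptions,
// not documented Claude SDK limits.
const MIN_THINKING_TOKENS = 1024;
const MAX_THINKING_TOKENS = 128_000;

function parseMaxThinkingTokens(value: unknown): number | undefined {
	// Treat missing values as "not set" rather than as an error.
	if (value === undefined || value === null) return undefined;
	// Reject non-numbers, floats, and out-of-range values at the API
	// boundary, before they reach the SDK.
	if (typeof value !== "number" || !Number.isInteger(value)) return undefined;
	if (value < MIN_THINKING_TOKENS || value > MAX_THINKING_TOKENS) {
		return undefined;
	}
	return value;
}
```

Returning undefined (rather than throwing) gives callers an explicit fallback, in line with the tolerant-parsing guideline quoted above.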
apps/desktop/src/lib/trpc/routers/ai-chat/index.ts (1)

91-104: Silent success when session is not found could mislead callers.

When updateAgentConfig is called with a non-existent sessionId, the session manager logs a warning and returns without error, so the mutation returns { success: true }. The frontend won't know the config wasn't actually applied. This is consistent with how interrupt handles missing sessions, but worth noting since the user may toggle thinking and believe it took effect on a stale/inactive session.

Consider returning a { success: false } or throwing a TRPCError with NOT_FOUND code if the session doesn't exist, so the UI can react accordingly.

apps/desktop/src/renderer/screens/main/components/WorkspaceView/ContentView/TabsContent/TabView/ChatPane/ChatInterface/components/ContextIndicator/ContextIndicator.tsx (1)

14-14: MAX_TOKENS is hardcoded but should vary by model.

modelId is available as a prop but unused for determining the context window size. Different Claude models have different token limits (e.g., 200K for Sonnet, but other models may differ). Consider deriving maxTokens from modelId via a lookup, or at minimum adding a comment explaining why 200K is a safe default.
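One way to act on this is a small lookup keyed by model id with an explicit fallback. The ids and limits below are illustrative assumptions, not values taken from the codebase:

```typescript
// Illustrative sketch: derive the context window from the selected model
// instead of a single hardcoded MAX_TOKENS. Model ids and limits are
// assumptions for illustration only.
const MODEL_CONTEXT_WINDOWS: Record<string, number> = {
	"claude-sonnet-4-5": 200_000,
	"claude-haiku-4-5": 200_000,
};

// Safe default when the model id is unknown or missing.
const DEFAULT_CONTEXT_WINDOW = 200_000;

function getContextWindow(modelId?: string): number {
	if (!modelId) return DEFAULT_CONTEXT_WINDOW;
	return MODEL_CONTEXT_WINDOWS[modelId] ?? DEFAULT_CONTEXT_WINDOW;
}
```

ContextIndicator could then compute maxTokens as getContextWindow(modelId), keeping the 200K behavior for current models while making per-model limits a one-line change.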

packages/ui/src/components/ai-elements/thinking-toggle.tsx (1)

14-16: Unnecessary "onToggle" in the Omit.

onToggle is not a standard prop on Button (or the underlying HTMLButtonElement), so omitting it has no effect. Only "onClick" needs to be omitted here.

Proposed fix
 export type ThinkingToggleProps = Omit<
 	ComponentProps<typeof Button>,
-	"onClick" | "onToggle"
+	"onClick"
 > & {

Comment on lines +312 to +358
async updateAgentConfig({
	sessionId,
	maxThinkingTokens,
}: {
	sessionId: string;
	maxThinkingTokens?: number | null;
}): Promise<void> {
	const session = this.sessions.get(sessionId);
	if (!session) {
		console.warn(
			`[chat/session] Session ${sessionId} not found for config update`,
		);
		return;
	}

	const registration = this.provider.getAgentRegistration({
		sessionId,
		cwd: session.cwd,
	});

	if (maxThinkingTokens !== undefined) {
		if (maxThinkingTokens === null) {
			delete registration.bodyTemplate.maxThinkingTokens;
		} else {
			registration.bodyTemplate.maxThinkingTokens = maxThinkingTokens;
		}
	}

	const headers = buildProxyHeaders();
	const res = await fetch(`${PROXY_URL}/v1/sessions/${sessionId}/agents`, {
		method: "POST",
		headers,
		body: JSON.stringify({ agents: [registration] }),
	});
	if (!res.ok) {
		throw new Error(
			`POST /v1/sessions/${sessionId}/agents failed: ${res.status}`,
		);
	}

	console.log(
		`[chat/session] Updated agent config for ${sessionId}`,
		maxThinkingTokens !== undefined
			? `maxThinkingTokens=${maxThinkingTokens}`
			: "",
	);
}

⚠️ Potential issue | 🟡 Minor

Missing error handling around fetch — inconsistent with other methods.

startSession and restoreSession wrap their fetch calls in try/catch and emit error events. updateAgentConfig lets fetch exceptions propagate as raw Errors, which will surface as INTERNAL_SERVER_ERROR in tRPC without contextual information. Consider wrapping in try/catch for consistency and better error messages.

As per coding guidelines, "Never swallow errors silently; at minimum log errors with context before rethrowing or handling them explicitly."

Proposed error handling
 	async updateAgentConfig({
 		sessionId,
 		maxThinkingTokens,
 	}: {
 		sessionId: string;
 		maxThinkingTokens?: number | null;
 	}): Promise<void> {
 		const session = this.sessions.get(sessionId);
 		if (!session) {
 			console.warn(
 				`[chat/session] Session ${sessionId} not found for config update`,
 			);
 			return;
 		}
 
 		const registration = this.provider.getAgentRegistration({
 			sessionId,
 			cwd: session.cwd,
 		});
 
 		if (maxThinkingTokens !== undefined) {
 			if (maxThinkingTokens === null) {
 				delete registration.bodyTemplate.maxThinkingTokens;
 			} else {
 				registration.bodyTemplate.maxThinkingTokens = maxThinkingTokens;
 			}
 		}
 
-		const headers = buildProxyHeaders();
-		const res = await fetch(`${PROXY_URL}/v1/sessions/${sessionId}/agents`, {
-			method: "POST",
-			headers,
-			body: JSON.stringify({ agents: [registration] }),
-		});
-		if (!res.ok) {
-			throw new Error(
-				`POST /v1/sessions/${sessionId}/agents failed: ${res.status}`,
-			);
-		}
-
-		console.log(
-			`[chat/session] Updated agent config for ${sessionId}`,
-			maxThinkingTokens !== undefined
-				? `maxThinkingTokens=${maxThinkingTokens}`
-				: "",
-		);
+		try {
+			const headers = buildProxyHeaders();
+			const res = await fetch(`${PROXY_URL}/v1/sessions/${sessionId}/agents`, {
+				method: "POST",
+				headers,
+				body: JSON.stringify({ agents: [registration] }),
+			});
+			if (!res.ok) {
+				throw new Error(
+					`POST /v1/sessions/${sessionId}/agents failed: ${res.status}`,
+				);
+			}
+
+			console.log(
+				`[chat/session] Updated agent config for ${sessionId}`,
+				maxThinkingTokens !== undefined
+					? `maxThinkingTokens=${maxThinkingTokens}`
+					: "",
+			);
+		} catch (error) {
+			const message = error instanceof Error ? error.message : String(error);
+			console.error(`[chat/session] Failed to update agent config:`, message);
+			throw error;
+		}
 	}
🤖 Prompt for AI Agents
In
`@apps/desktop/src/lib/trpc/routers/ai-chat/utils/session-manager/session-manager.ts`
around lines 312 - 358, The updateAgentConfig method currently performs fetch
without try/catch; wrap the network call and response check in a try/catch
inside updateAgentConfig to mirror startSession/restoreSession: use
buildProxyHeaders and fetch as before, on any thrown error or non-ok response
call this.emit('error', { sessionId, action: 'updateAgentConfig', error }) and
log a contextual message (including sessionId and maxThinkingTokens), then
rethrow a new Error with contextual details (or rethrow the caught error) so
callers get consistent, informative errors; keep existing registration logic
(this.provider.getAgentRegistration) and response handling (res.ok check) but
move them inside the try block.

Comment on lines +327 to +330
const registration = this.provider.getAgentRegistration({
	sessionId,
	cwd: session.cwd,
});

⚠️ Potential issue | 🟠 Major


updateAgentConfig rebuilds registration without paneId/tabId — routing context will be lost.

startSession and restoreSession pass paneId and tabId to getAgentRegistration, but updateAgentConfig omits them. Since ActiveSession does not store these values, they cannot be retrieved when the registration is rebuilt and POSTed back, causing any pane/tab routing information to be lost.

Store paneId and tabId in the ActiveSession interface so they're available when updating the agent config.


Comment on lines +52 to +54
const [thinkingEnabled, setThinkingEnabled] = useState(false);

const updateConfig = electronTrpc.aiChat.updateSessionConfig.useMutation();

⚠️ Potential issue | 🟡 Minor

thinkingEnabled is not initialized from the server-side session config.

If a session already has maxThinkingTokens set (e.g., after a restore), the toggle will show as disabled even though extended thinking is active on the backend. Consider initializing from the existing session or agent config to keep UI in sync.
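The derivation step can live in a pure helper that a useEffect then feeds into setThinkingEnabled. A sketch, assuming the config exposes maxThinkingTokens as number | null | undefined (the exact config shape is an assumption):

```typescript
// Hypothetical helper: map a session/agent config value to toggle state.
// Any positive token budget means extended thinking is active.
function deriveThinkingEnabled(maxThinkingTokens?: number | null): boolean {
	return typeof maxThinkingTokens === "number" && maxThinkingTokens > 0;
}
```

In ChatInterface this would run in a useEffect keyed on the session, e.g. setThinkingEnabled(deriveThinkingEnabled(session?.config?.maxThinkingTokens)), so a restored session with thinking already on renders the toggle correctly.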


Comment on lines +222 to +231
const handleThinkingToggle = useCallback(
	(enabled: boolean) => {
		setThinkingEnabled(enabled);
		updateConfig.mutate({
			sessionId,
			maxThinkingTokens: enabled ? 10000 : null,
		});
	},
	[sessionId, updateConfig],
);

⚠️ Potential issue | 🟡 Minor

Magic number 10000 and missing error handling on config mutation.

Two issues in this handler:

  1. Magic number: 10000 should be extracted to a named constant (e.g., DEFAULT_MAX_THINKING_TOKENS in constants.ts). As per coding guidelines, hardcoded magic numbers should be extracted to named constants at module top.

  2. No error handling: If updateConfig.mutate fails, the local thinkingEnabled state will be out of sync with the server. Add onError to roll back the optimistic state or at minimum log the failure — the coding guidelines require errors not be swallowed silently.

Proposed fix

In constants.ts (or top of file):

const DEFAULT_MAX_THINKING_TOKENS = 10_000;

Then in the handler:

 const handleThinkingToggle = useCallback(
   (enabled: boolean) => {
     setThinkingEnabled(enabled);
     updateConfig.mutate(
       {
         sessionId,
-        maxThinkingTokens: enabled ? 10000 : null,
+        maxThinkingTokens: enabled ? DEFAULT_MAX_THINKING_TOKENS : null,
+      },
+      {
+        onError: (err) => {
+          console.error("[chat/thinking] Failed to update config:", err);
+          setThinkingEnabled(!enabled);
+        },
       },
     );
   },
   [sessionId, updateConfig],
 );

@github-actions github-actions Bot (Contributor) commented Feb 8, 2026

🧹 Preview Cleanup Complete

The following preview resources have been cleaned up:

  • ✅ Neon database branch
  • ✅ Electric Fly.io app

Thank you for your contribution! 🎉

@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 1

🤖 Fix all issues with AI agents
In
`@apps/desktop/src/renderer/screens/main/components/WorkspaceView/ContentView/TabsContent/TabView/ChatPane/ChatInterface/ChatInterface.tsx`:
- Around line 233-242: handleModelSelect currently performs an optimistic update
by calling setSelectedModel(model) before updateConfig.mutate and lacks error
rollback; change it to capture the previous model (e.g., const previous =
selectedModel) before setting the new model, pass an onError handler to
updateConfig.mutate that resets selectedModel back to previous and logs the
error, and optionally handle onSettled/onSuccess for consistency; ensure the
onError closure references the correct previous value and that the callback
still depends on sessionId and updateConfig.
🧹 Nitpick comments (1)
apps/streams/src/claude-agent.ts (1)

35-36: Add tighter Zod constraints on maxThinkingTokens and model.

These fields come from external callers and are passed directly to the Claude SDK. maxThinkingTokens currently accepts negatives, zero, and floats; model accepts an empty string. Both would likely produce opaque downstream API errors.

Proposed fix
-	maxThinkingTokens: z.number().optional(),
-	model: z.string().optional(),
+	maxThinkingTokens: z.number().int().positive().optional(),
+	model: z.string().min(1).optional(),

As per coding guidelines, "Validate external API data as untrusted by handling missing fields, unknown enums, and unexpected shapes with tolerant parsing and explicit fallbacks".

Comment on lines +233 to +242
const handleModelSelect = useCallback(
	(model: ModelOption) => {
		setSelectedModel(model);
		updateConfig.mutate({
			sessionId,
			model: model.id,
		});
	},
	[sessionId, updateConfig],
);

⚠️ Potential issue | 🟡 Minor

handleModelSelect has no error rollback — optimistic state can drift.

Same pattern as handleThinkingToggle: if the mutation fails, selectedModel will reflect a model the backend didn't accept. Add onError to revert and log.

Proposed fix
 const handleModelSelect = useCallback(
   (model: ModelOption) => {
+    const previousModel = selectedModel;
     setSelectedModel(model);
-    updateConfig.mutate({
-      sessionId,
-      model: model.id,
-    });
+    updateConfig.mutate(
+      {
+        sessionId,
+        model: model.id,
+      },
+      {
+        onError: (err) => {
+          console.error("[chat/model] Failed to update model:", err);
+          setSelectedModel(previousModel);
+        },
+      },
+    );
   },
-  [sessionId, updateConfig],
+  [sessionId, updateConfig, selectedModel],
 );

@Kitenite Kitenite merged commit 33c7a04 into main Feb 8, 2026
14 checks passed
@Kitenite Kitenite deleted the kitenite/low-hanging-ui branch February 8, 2026 11:00