fix(chat): cut display polling to 4fps and restore query cache defaults #3562
Memory leak and CPU spiral root-caused to `staleTime: 0, gcTime: 0` + 60fps polling: React Query can't dedupe or GC anything, and the render path churns allocations every 16ms. Restoring the React Query defaults (5min gcTime) fixes the leak. Server poll rate is independent of perceived stream smoothness — StreamingMessageText already reveals text client-side at 60fps from whatever buffer the server delivers. 4fps polling keeps that buffer fed with plenty of headroom. Also removes the `isRunning` invalidation effect — redundant when the query is polling. Builds on and supersedes #3170 by @thepathmakerz, which diagnosed the same root cause. This version takes the subtractive path (-21 lines) instead of adaptive polling (+36). Closes #3049
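For reference, the fps-to-interval mapping at play, as quoted in the review discussion (a sketch; clamping details may differ from the shipped helper):

```typescript
// Sketch of the toRefetchIntervalMs helper discussed in review.
// 60fps maps to a 16ms poll; the new 4fps default maps to 250ms.
function toRefetchIntervalMs(fps: number): number {
  // Invalid input currently falls back to the legacy 60fps interval.
  if (!Number.isFinite(fps) || fps <= 0) return Math.floor(1000 / 60);
  return Math.max(16, Math.floor(1000 / fps));
}

console.log(toRefetchIntervalMs(60)); // 16
console.log(toRefetchIntervalMs(4)); // 250
```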
📝 Walkthrough
Changes reduce the chat display polling frequency from 60fps to 4fps, remove redundant cache invalidation logic, and adjust React Query cache configuration. These modifications address excessive refetching and memory churn that contributed to a renderer process memory leak.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Greptile Summary
This PR fixes a memory leak and CPU spiral caused by `staleTime: 0, gcTime: 0` combined with 60fps polling. Key changes:
Confidence Score: 5/5 — Safe to merge: targeted, minimal fix with no regressions introduced. The two changes (restoring gcTime and lowering fps) are both correct and complementary. The utils cleanup is accurate: removed where no longer needed, retained where still used for mutations. The deleted useEffect was genuinely redundant given the polling regime. No logic is restructured and no new surface area is introduced. The only note is a pre-existing minor inconsistency in the `toRefetchIntervalMs` fallback path that does not affect normal operation. No files require special attention.
| Filename | Overview |
|---|---|
| apps/desktop/src/renderer/routes/_authenticated/_dashboard/v2-workspace/$workspaceId/hooks/usePaneRegistry/components/ChatPane/hooks/useWorkspaceChatDisplay/useWorkspaceChatDisplay.ts | v2 workspace hook: fps default 60→4, staleTime/gcTime removed, redundant isRunning invalidation useEffect deleted, and now-unused utils import cleaned up — all correct. |
| packages/chat/src/client/hooks/use-chat-display/use-chat-display.ts | Chat package hook: fps default 60→4, staleTime/gcTime removed; utils is correctly retained as it is still needed for all command mutations (sendMessage, stop, abort, respondTo*). |
Sequence Diagram
```mermaid
sequenceDiagram
    participant UI as React Component
    participant Hook as useChatDisplay (fps=4)
    participant RQ as React Query Cache (gcTime=5min)
    participant Server as tRPC Server
    Note over UI,Server: Before fix: gcTime=0, fps=60 → 60 alloc+discard/sec
    Note over UI,Server: After fix: gcTime=5min, fps=4 → stable cache, 250ms poll
    loop Every 250ms (4fps)
        Hook->>RQ: refetchInterval fires
        RQ->>Server: GET getDisplayState
        Server-->>RQ: response cached (gcTime=5min)
        RQ-->>Hook: data (staleTime=0, refetch queued)
        Hook-->>UI: updated displayState + messages
    end
    UI->>Hook: sendMessage()
    Hook->>Server: mutation via useMutation
    Server-->>Hook: ack
    Note over Hook,RQ: Next 250ms poll picks up new state (no manual invalidation needed)
```
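The headroom claim (a 250ms server poll feeding a ~60fps client-side reveal) can be sanity-checked with a toy simulation; all numbers here are illustrative, not measured:

```typescript
// Toy model of the streaming buffer: the server poll appends a chunk of
// characters every 250ms (4fps), while the client reveals a few characters
// every 16ms (~60fps). The buffer never starves as long as each chunk
// covers one poll interval's worth of reveals (here ~16 ticks x 5 chars = 80).
function minBufferOverOneSecond(chunkChars: number, revealPerTick: number): number {
  let buffer = 0;
  let min = Infinity;
  for (let ms = 0; ms < 1000; ms++) {
    if (ms % 250 === 0) buffer += chunkChars; // 4fps server poll
    if (ms % 16 === 0) {
      buffer -= revealPerTick; // 60fps client reveal
      min = Math.min(min, buffer);
    }
  }
  return min;
}

console.log(minBufferOverOneSecond(100, 5)); // stays >= 0, so no starvation
```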
Comments Outside Diff (1)
- apps/desktop/src/renderer/routes/_authenticated/_dashboard/v2-workspace/$workspaceId/hooks/usePaneRegistry/components/ChatPane/hooks/useWorkspaceChatDisplay/useWorkspaceChatDisplay.ts, lines 13-16: **`toRefetchIntervalMs` fallback still targets 60fps**

This is pre-existing code, but now that the default has moved to 4fps, the invalid-input fallback (`!Number.isFinite(fps) || fps <= 0` → 16ms) is conspicuously inconsistent. A caller who accidentally passes `fps = 0` or `fps = Infinity` will silently receive 60fps polling (16ms) — the exact rate this PR is moving away from — rather than falling back to the new default of 4fps.
Consider aligning the fallback with the new intent:
```suggestion
function toRefetchIntervalMs(fps: number): number {
if (!Number.isFinite(fps) || fps <= 0) return Math.floor(1000 / 4);
return Math.max(16, Math.floor(1000 / fps));
}
```
Or simply default to 250ms directly. This wouldn't affect normal usage (the default `fps = 4` is always valid), but it closes the gap for misconfigured callers.
🧹 Nitpick comments (2)
packages/chat/src/client/hooks/use-chat-display/use-chat-display.ts (1)
28-31: Nit: invalid-fps fallback still returns a 60fps interval.

Now that the intended default is 4fps, the invalid/≤0 branch returning `Math.floor(1000 / 60)` (≈16 ms) is surprising — a malformed input ends up polling ~15× faster than an unspecified one. Consider aligning the fallback with the new default (e.g., `1000 / 4 = 250`) or extracting a shared `DEFAULT_FPS` constant used by both the destructure default and this fallback. Low priority; pre-existing behavior.

♻️ Suggested tweak

```diff
+const DEFAULT_FPS = 4;
+
 function toRefetchIntervalMs(fps: number): number {
-  if (!Number.isFinite(fps) || fps <= 0) return Math.floor(1000 / 60);
+  if (!Number.isFinite(fps) || fps <= 0) return Math.floor(1000 / DEFAULT_FPS);
   return Math.max(16, Math.floor(1000 / fps));
 }
```

And reuse `DEFAULT_FPS` in the destructure at line 116.
apps/desktop/src/renderer/routes/_authenticated/_dashboard/v2-workspace/$workspaceId/hooks/usePaneRegistry/components/ChatPane/hooks/useWorkspaceChatDisplay/useWorkspaceChatDisplay.ts (1)
13-16: Optional: de-duplicate with the package hook.

This file now mirrors `packages/chat/src/client/hooks/use-chat-display/use-chat-display.ts` quite closely — `toRefetchIntervalMs`, `findLastUserMessageIndex`, `findLatestAssistantErrorMessage`, `withoutActiveTurnAssistantHistory`, `hasFileOrImagePart`, `countFileMessages`, and `getLegacyImagePayload` are essentially duplicated, differing only in the `DisplayStateOutput`/`ListMessagesOutput` router source. Not blocking for this PR (out of scope and would widen the diff), but worth a follow-up to hoist the pure helpers into a shared module so future polling/GC tweaks don't have to be applied in two places.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8769eb7b-1eb0-40ee-802a-b052dca221f0
📒 Files selected for processing (2)
- apps/desktop/src/renderer/routes/_authenticated/_dashboard/v2-workspace/$workspaceId/hooks/usePaneRegistry/components/ChatPane/hooks/useWorkspaceChatDisplay/useWorkspaceChatDisplay.ts
- packages/chat/src/client/hooks/use-chat-display/use-chat-display.ts
…ts (superset-sh#3562)
* feat(chat): render subagent activity inline as collapsible tool wrapper
Remove SubagentExecutionMessage from the bottom-pinned section so subagent
tool calls render inline within AssistantMessage via the existing
SubagentToolCall collapsible component, consistent with all other tool calls.
* fix(chat): add gradient fade-out and bottom padding to input footer
Removes top padding from ChatInputDropZone so content is flush with the
bottom of the scroll area, and adds a CSS pseudo-element gradient that
fades the conversation content into the background above the input bar.
* feat(ui): redesign ToolInput/ToolOutput/ToolHeader — compact, monospaced
- ToolInput/ToolOutput: replace CodeBlock with plain <pre>, remove p-4
padding, switch to bg-muted/30 backgrounds, rename labels to
"Input"/"Output", apply font-mono throughout
- ToolHeader: reduce px-2.5 → px-1, add rounded-b-md, swap chevron
direction on hover (right=collapsed, down=expanded), add open prop
- Tool: add font-mono to outer Collapsible wrapper
* fix(ui): stop scroll anchor when clicking tool call triggers
Expanding a tool call was causing the scroll container to jump. Detect
clicks on [data-tool-trigger] elements and call stopScroll() so the
stick-to-bottom behaviour doesn't fight the user interaction.
* feat(ui): add ToolCallRow and refactor tool components
Introduces a shared ToolCallRow component that encapsulates the common
collapsible row pattern: icon/chevron, ShimmerLabel title, muted
description, status slot, left-border content area, and an optional
headerExtra element for out-of-trigger action buttons.
Refactors BashTool, WebSearchTool, WebFetchTool, and FileDiffTool to use
ToolCallRow, removing duplicated hover/chevron/collapsible logic from each.
All tools now have consistent: font-mono, px-1 header padding, rounded-b-md
on hover, ml-2.5 border-l content alignment, and chevron-right/down hover
behaviour.
* feat(desktop): refactor tool call components to use shared ToolCallRow
Migrates GenericToolCall, SupersetToolCall, SubagentToolCall, and
ReadOnlyToolCall to use the new ToolCallRow component, eliminating
per-component chevron/hover/collapsible boilerplate.
Also updates ToolCallBlock dispatch: web_search tool variants without
parsed results now fall through to GenericToolCall with a globe icon
instead of WebSearchTool (which requires results to be expandable).
Renames "Subagent" label to "Agent" with the agent type as a muted
description.
* feat(chat): replace inline question UI with footer overlay
- Add QuestionInputOverlay component: numbered options, "Something else"
free-text row with pencil icon, Skip button, cross-fade to submit on type
- Refactor AskUserQuestionToolCall to plain ToolCallRow (no inline answer UI)
- Wire pendingQuestion/handleQuestionResponse/stopActiveResponse to footer
in both ChatPane variants instead of ChatMessageList
- Remove PendingQuestionMessage from ChatMessageList and clean up its props
* feat(ui): replace tool call header indicators with braille spinner and left-side icons
- Add BrailleSpinner component (braille chars, amber, matches sidenav style)
- Show spinner in icon slot while pending, red X on error, icon on complete
- Remove right-side status slot (no spinner, checkmark, or X on the right)
- Remove title shimmer animation
* feat(chat): hide Question tool row while active, show with description after answered/interrupted
- Thread isStreaming through ToolCallBlock to AskUserQuestionToolCall
- Return null only when isPending && still streaming (overlay is active)
- Show collapsed row with question text as description once answered or interrupted
* feat(chat): improve tool call UX and styling
- Align tool call icon with chat text using -mx-1 negative margin
- Remove overflow-hidden from MessageContent to avoid clipping
- Hide description in tool call header when expanded
- Show query field as plain text with Query/Response labels instead of raw JSON
- Add subtle focus-visible ring to collapsible trigger button
- Adjust content padding (pl-3 py-1) to align with heading text
- Use "Type your answer..." placeholder when question has no options
* fix(chat): keep answered question messages visible during active turn
Extract hasAnsweredQuestionToolCall to a shared utility and use it in
withoutActiveTurnAssistantHistory / getVisibleMessages so that assistant
messages containing an answered ask_user_question are kept visible in the
message list while a session is still running. Previously all assistant
messages from the active turn were hidden, causing questions and their
answers to disappear from chat history until the turn completed.
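A minimal sketch of that visibility rule (the message shapes and state names here are simplified stand-ins for the real chat types):

```typescript
type MessagePart = { type: string; toolName?: string; state?: string };
type ChatMessage = { role: "user" | "assistant"; parts: MessagePart[] };

// Assumed part/state names for illustration only.
function hasAnsweredQuestionToolCall(msg: ChatMessage): boolean {
  return msg.parts.some(
    (p) =>
      p.type === "tool-call" &&
      p.toolName === "ask_user_question" &&
      p.state === "output-available",
  );
}

// While a session is running, hide in-progress assistant output except
// messages that carry an already-answered question.
function getVisibleMessages(history: ChatMessage[], isRunning: boolean): ChatMessage[] {
  if (!isRunning) return history;
  return history.filter((m) => m.role !== "assistant" || hasAnsweredQuestionToolCall(m));
}
```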
* fix(chat): refactor question tool call — answer bubble, skip badge, plain-string fallback
Replace the inline Q&A markdown block with a right-aligned answer bubble
(styled like a user message) shown after the question is answered, and a
"Question skipped" badge when the result carries no answers. Add a fallback
for backends that return a plain string result (result.text / result.answer)
rather than a structured answers map. Remove the isPending spinner from the
ToolCallRow since the overlay in the footer already indicates active state.
* fix(chat): overlay freezes in place on submit, skip sends answer, scroll on question changes
- Fire onRespond without awaiting so the overlay never shows a separate
"Waiting for response..." state. The overlay stays frozen (same size and
content) until pendingQuestion updates from the server, at which point
React remounts the component via key={pendingQuestion.questionId}.
- Highlight the chosen option and replace its number badge with a spinner;
show a spinner on the pencil icon when the answer came from the text input.
- Skip now sends "skip" as the answer instead of aborting the agent, so the
LLM can continue. The X button still stops the session.
- Fix isQuestionSubmitting hardcoded to false — now passes questionResponsePending.
- Add footerScrollTrigger so the chat scrolls to the bottom whenever the
question overlay appears, updates, or disappears.
* feat(chat): question overlay max-height with scrollable options and pinned header/footer
* fix(chat): stabilize useFocusPromptOnPane effect dependency
* feat(chat): show question tool call with status and collapsible answer
Rewrites AskUserQuestionToolCall to use the shared ToolCallRow component.
- Supports both ask_user (singular question/options) and ask_user_question
(array of questions) tool schemas
- Shows AWAITING RESPONSE / ANSWERED / CANCELLED inline status description
- Expands to reveal the question text + submitted answer when answered
- Extracts answer from result.content "User answered: <x>" format
- ToolCallRow now uses cursor-text when the row has no expandable content
* fix(chat): keep question tool calls visible during active assistant turn
getVisibleMessages() filtered out all assistant messages while a turn was
in progress. Added hasPendingQuestionToolCall() to also pass through any
assistant message that contains an unanswered question tool call, so the
"AWAITING RESPONSE" tool call row remains visible in the message list.
* feat(chat): optimistically hide question overlay on answer submit
Tracks the most recently answered question ID in ChatPaneInterface and
passes it to ChatUploadFooter so the overlay disappears immediately on
submit without waiting for the server round-trip. Also threads
pendingQuestion and answeredQuestionId down to ChatMessageList to suppress
the ThinkingMessage spinner while a question is awaiting a response.
* feat(chat): scroll to bottom on message send, question arrival, and answer
Adds a ScrollAnchor component inside the Conversation (StickToBottom)
context that handles three cases:
- isAwaitingAssistant becomes true: re-pins scroll so Thinking and the
streaming response are always visible after sending any message
- pendingQuestion.questionId changes: scrolls to bottom when a new
question arrives so the overlay doesn't cover streaming content
- answeredQuestionId changes (10ms delay): the overlay hide causes the
footer to shrink and the scroll container to grow; the library
interprets the resulting scrollTop clamp as "user scrolled up" via a
1ms setTimeout, so we run after it with a 10ms delay to restore the pin
* feat(chat): show cancelled status when question is aborted
- Question tool call now shows CANCELLED status in the header instead of
nothing when aborted (output-error state or Mastracode isError: true)
- Error result content is no longer mistaken for an answer, fixing the
ANSWERED status shown on revisit after an abort
- Expanded view shows the question text and an "Aborted by the user"
label with a red CircleX icon
- INTERRUPTED / Response stopped footer is suppressed when the
interruption was caused by an aborted question
* fix(chat): keep bottom-pinned scroll when expanding a tool call
When the chat is pinned to the bottom and the user opens a collapsible
tool call, skip stopScroll() so stick-to-bottom's resize handler
auto-scrolls to reveal the expanded content instead of hiding it behind
the prompt input.
* fix(chat): exclude scrollbar column from input footer gradient
* fix(chat): stop scroll jump when expanding any tool call
Remove the overly-broad "scroll to bottom if last trigger" heuristic that
used a DOM query to find the last [data-tool-trigger] in the container.
This was wrong — it would fire even when there were messages below the
tool call being expanded.
Now ConversationContent simply unpins from bottom on any tool trigger
click, preventing the resize handler from jumping the scroll position.
Nothing more.
* feat(chat): always use ask_user tool for questions in Superset
Two-pronged approach to ensure the LLM never asks questions as plain text
(which bypasses the question overlay) and always uses the ask_user tool:
1. AGENTS.md — adds a project-level override rule that loads into the
mastracode system prompt for any session in the Superset workspace.
2. host-service — writes a managed ~/.mastracode/AGENTS.md with the same
rule, applied globally to every workspace opened in the desktop app.
Uses a managed-by marker to avoid overwriting user-authored files.
Root cause: the default mastracode tool guidance says "Don't use this for
simple yes/no — just ask in your text response." These rules override that
by being appended to the system prompt after the base instructions.
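The managed-by marker check described above can be sketched as follows (the marker string is hypothetical):

```typescript
// Hypothetical marker text; the real managed-by marker may differ.
const MANAGED_BY_MARKER = "# managed-by: superset-desktop";

// Write the managed AGENTS.md only when no file exists or when the existing
// file is one we wrote ourselves; user-authored files are left untouched.
function shouldWriteManagedAgentsFile(existing: string | null): boolean {
  if (existing === null) return true;
  return existing.includes(MANAGED_BY_MARKER);
}
```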
* feat(chat): pending question drives workspace nav status and native notification
- When ask_user tool fires, emit PendingQuestion lifecycle event from the
harness subscriber (same pipeline as PermissionRequest/Start/Stop)
- useAgentHookListener maps PendingQuestion → "permission" pane status,
showing the orange dot in the workspace nav immediately, even when the
tab is not focused
- NotificationManager plays sound and shows "Awaiting Response" native
toast for PendingQuestion events with visibility suppression
- Cancel tooltip added to QuestionInputOverlay X button
- Cancelled question tool calls now show question text + "Aborted by the
user" immediately on stop, without requiring a page reload
- isInterrupted prop threaded through MessagePartsRenderer → ToolCallBlock
→ AskUserQuestionToolCall so pending questions show CANCELLED status
when the run is interrupted
- INTERRUPTED badge is always shown alongside cancelled question state
* fix(chat): clear orange dot on answer submit, focus prompt on dismiss, no green dot when tab is focused
- Clear pane status to idle immediately when user submits a question answer
- Focus prompt textarea when question overlay dismisses (rAF to let overlay unmount and browser focus settle)
- Fix Stop event idle/review determination: read URL from hash (not pathname, which is always the file path in hash-routed Electron app), and add focusedPaneIds as a reliable fallback so panes the user is actively interacting with don't receive a spurious green dot
- Remove MarkdownToggleContent in favour of always-on MessageResponse for subagent output
* feat(chat): render subagent task prompt with markdown via MessageResponse
Render the subagent task text through MessageResponse so markdown
formatting (lists, bold, code spans) is applied consistently with
the response text below it.
* fix(chat): scale down headings and fix list layout in subagent output
Headings were rendering at full browser size (text-2xl/3xl) inside the
compact xs subagent block. Override h1-h6 to text-sm/xs with tighter
margins, and remove the top margin on first-child paragraphs inside list
items to fix ordered list numbers appearing on a separate line.
* feat(chat): render read file tool output with syntax-highlighted code viewer
- ReadOnlyToolCall fetches file content directly via tRPC filesystem.readFile
(same path as the file pane) instead of parsing MCP tool output, eliminating
metadata artifacts like line-number prefixes and byte-count headers
- Uses shared detectLanguage() for syntax highlighting language detection
- Renders with CodeBlock (Shiki) with a filename + line-range header styled
like table headers (bg-muted/50), showLineNumbers, and colorize=false for
plain white text
- Adds colorize prop to CodeBlock to suppress syntax colors while keeping
line numbers at reduced opacity
- Passes workspaceId/workspaceCwd from ToolCallBlock to ReadOnlyToolCall
- Fixes withoutActiveTurnAssistantHistory to preserve completed prior-phase
assistant messages (e.g. read-file before a question answer) by keeping
messages that have a stopReason and a different id from currentMessage,
preventing tool calls from disappearing after answering a question
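The retention rule in the last bullet can be sketched as a small predicate (field names assumed):

```typescript
type AssistantMessage = { id: string; stopReason?: string };

// Keep a prior-phase assistant message during an active turn when it has
// completed (has a stopReason) and is not the currently streaming message.
function keepDuringActiveTurn(msg: AssistantMessage, currentMessageId: string): boolean {
  return msg.stopReason !== undefined && msg.id !== currentMessageId;
}
```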
* feat(chat): improve tool call error and task_write UX
- ToolCallRow: replace XIcon with "ERROR" label + XCircleIcon in description slot on error
- SupersetToolCall: add subtitle prop, render output content via MessageResponse instead of raw JSON
- TaskWriteToolCall: new component for task_write — "Update Tasks" title with ListTodoIcon and semantic description (task count + status breakdown)
* fix(chat): add vertical padding to read file tool content area
* feat(chat): add LspInspectToolCall with ActivityIcon and file subtitle
* fix(chat): handle mastra_workspace_lsp_inspect tool name alias
* fix(chat): use FileSearchIcon for LSP Inspect tool call
* feat(chat): show input/output content in LSP Inspect tool call
* fix(chat): use SearchCheckIcon for LSP Inspect tool call
* fix(chat): extract TOOL_CALL_MD_CLASSNAME for global compact markdown in tool calls
Adds inline code (text-xs) fix alongside existing heading overrides.
SubagentToolCall and SupersetToolCall now share the same constant so
future patches only need to happen in one place.
* feat(chat): improve subagent tool call display and share read-file component
- Filter empty messages (step-start/source-only) from chat history
- Add expandable content with subtitles to subagent inner tool calls (Read, List Files, Search, Write, Edit, Web)
- Extract shared ReadFileTool component to packages/ui for reuse across main and subagent tool calls
- Subagent read tool now shows styled CodeBlock with syntax highlighting, matching main agent display
- Thread workspaceId/workspaceCwd/onOpenFileInPane through SubagentToolCall → SubagentInnerToolCall so subagent read calls show the open-in-pane button
* fix(chat): forward workspace props to ReadOnlyToolCall in MessagePartsRenderer
workspaceId and workspaceCwd were available in MessagePartsRenderer but not
forwarded to ReadOnlyToolCall, silently disabling the disk-read feature for
read_file tool calls rendered via that path.
* fix(chat): remove dead code from MessageList
- Remove interruptedByAbortedQuestion which was computed but never
referenced in JSX or logic
- Move hasRenderableParts to after imports (was inserted between them)
- Remove unused getToolName, normalizeToolName, ToolPart imports
- Add comment explaining the runtime-only "error" part type cast
* fix(chat): fix type safety, path normalization, and memoize in SubagentInnerToolCall
- Replace as never with as BundledLanguage for language prop type safety
- Replace naive path concatenation with normalizeWorkspaceFilePath to handle
./, ../, file:// and workspace boundary validation (matches ReadOnlyToolCall)
- Rename shadowed normalized variable to resolvedPath in openInPane closure
- Memoize parseReadFileResult call to avoid re-parsing on every render
* fix(chat): add stale time and loading state to ReadOnlyToolCall file query
- Add staleTime: Infinity so completed read-file tool calls don't refetch
on remount (prevents IPC burst when scrolling long conversations)
- Show a spinner row while the disk read is in flight instead of flashing
the raw ToolInput/ToolOutput view
* feat(chat): replace text input with Tiptap editor for slash commands and file mentions
- Add TiptapPromptEditor with ProseMirror-based rich text input
- Slash command chips (/command) as inline atom nodes, insertable anywhere in message
- File mention chips (@path) anchored to cursor position via virtual float
- SlashCommandMenu width matches prompt input via --radix-popover-trigger-width
- Selecting a command inserts a chip node instead of immediately submitting
- Popover closes on editor blur, reopens on focus
- Tab no longer auto-selects commands (only Enter selects)
- serializeEditorToText serializes chip nodes to /name and @path for submission
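The serialization step can be sketched over simplified node shapes (the real implementation walks a ProseMirror document):

```typescript
type EditorNode =
  | { type: "text"; text: string }
  | { type: "commandChip"; name: string }
  | { type: "fileMention"; path: string };

// Chips serialize back to their plain-text forms for submission.
function serializeEditorToText(nodes: EditorNode[]): string {
  return nodes
    .map((n) => {
      if (n.type === "commandChip") return `/${n.name}`;
      if (n.type === "fileMention") return `@${n.path}`;
      return n.text;
    })
    .join("");
}
```

For example, a /review chip followed by the text " look at " and a file mention for src/app.ts serializes to "/review look at @src/app.ts".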
* feat(chat): add skill preload — /command chips trigger skill tool calls before LLM
- Add SkillToolCall component (ZapIcon, Skill(name) title, success/error state)
- Register SkillToolCall in ToolCallBlock for tool names 'skill' and 'load_skill'
- In ChatPaneInterface.handleSend: extract custom command chip names from content,
strip leading / from message text, pass names as metadata.skills to sendMessage
- Add skills?: string[] to sendMessageInput metadata schema (zod.ts)
- Pass preloadSkills to harness.sendMessage in service.ts
- Add ChatSendMessageInput.metadata.skills type field
- Add docs/skill-preload-feature.md with implementation state and setup instructions
Requires superset-sh/mastra#9 for harness.sendMessage preloadSkills support
and .claude/commands/ being included in skillPaths.
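The extraction step in handleSend might look roughly like this; the helper name and exact stripping behavior are assumptions based on the commit description:

```typescript
// Hypothetical helper: detect a leading /command chip, record its name as a
// skill to preload, and strip only the leading "/" from the message text.
function extractSkillPreload(
  text: string,
  knownCommands: readonly string[],
): { text: string; skills: string[] } {
  const match = /^\/(\S+)/.exec(text);
  if (match && knownCommands.includes(match[1])) {
    return { text: text.slice(1), skills: [match[1]] };
  }
  return { text, skills: [] };
}
```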
* feat(ui): update shared AI element components for chat UX
- braille-spinner: improve animation timing
- code-block: add copy button and syntax highlight tweaks
- message: simplify prose class handling
- prompt-input: add focusShortcutText prop support
- tool-call-row: tighten collapsible layout and spacing
- input-group: support rounded-full variant
- globals.css: add chat-specific scrollbar and prose overrides
* fix(chat): improve tool call display components
- AskUserQuestionToolCall: redesign option layout with better button styling
- SupersetToolCall: render markdown content in tool output
- SubagentToolCall/SubagentInnerToolCall: tighten display, fix edge cases
- TaskWriteToolCall: simplify status rendering
- ReadOnlyToolCall: add workspace prop forwarding and memoize file query
- QuestionInputOverlay: improve option button layout
- MessageList: remove unused prop
* fix(chat): update message list and subagent execution display
- ChatMessageList: update subagent message grouping and rendering
- SubagentExecutionMessage: improve tool call display during subagent runs
- messageListHelpers: refine pending/streaming message detection
- use-chat-display: minor hook cleanup
- screens/main ChatPaneInterface: propagate workspace props
* fix(chat): suppress empty assistant message wrappers
When an assistant message has no renderable content (e.g. redacted_thinking
or unrecognized AI SDK step markers), return null instead of rendering
empty Message/MessageContent divs that cause blank gaps between messages.
* feat(chat): show styled "Not Configured" state for LSP inspect when LSP is absent
Detect the "LSP is not configured for this workspace" error and surface it
as a red "Not Configured" badge in the tool call row rather than triggering
the generic error styling.
* feat(chat): universal "not configured" warning on tool call rows
Detect "not configured" errors in getGenericToolCallState and surface
them as a filled amber warning triangle with a "Not configured" tooltip
in the ToolCallRow status area. File name description is preserved.
GenericToolCall passes isNotConfigured through so the treatment applies
to all tool calls, not just LSP Inspect.
* fix(chat): move not-configured warning icon inline after description
Show the outlined amber TriangleAlertIcon next to the file name in the
description area instead of in the right-side status slot.
* feat(chat): clickable file names on file-related tool call rows
Replace the standalone open-in-pane icon button with a hover-underline
treatment directly on the filename. Applies to Read, Check file, Write,
Edit, Delete, and Smart Edit tool call rows.
Extracts a shared ClickableFilePath component (span[role=button]) that
nests safely inside CollapsibleTrigger without invalid nested-button HTML.
* feat(ui): add ShowCode component with expand/collapse, copy, and startLine support
Adds a new ShowCode component that unifies all code display surfaces — tool
call file views and markdown code fences — into a single block with:
- Filename/language header with optional clickable file path and line range
- Expand/collapse toggle (appears when content exceeds ~15 lines)
- Copy and open-in-pane action buttons in the header
- startLine offset for partial-file display (line numbers count from the
correct offset rather than always starting at 1)
- Language fallback in highlightCode: unknown Shiki languages silently
retry as "text" instead of throwing
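The fallback described above (and the recursion guard added in a later commit) can be sketched as a small wrapper. This is an illustrative model, not the real Shiki API: `codeToHtmlStub`, `KNOWN_LANGS`, and `escapeHtml` are stand-ins for the actual highlighter call.

```typescript
// Minimal stand-in for a highlighter that throws on unknown languages,
// modeling the behavior of the real Shiki call.
const KNOWN_LANGS = new Set(["ts", "tsx", "json", "text"]);

function escapeHtml(s: string): string {
  return s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");
}

function codeToHtmlStub(code: string, lang: string): string {
  if (!KNOWN_LANGS.has(lang)) throw new Error(`Unknown language: ${lang}`);
  return `<pre class="lang-${lang}">${escapeHtml(code)}</pre>`;
}

// Fallback wrapper: unknown languages silently retry as "text"; the
// lang === "text" check keeps a failing "text" pass from recursing forever
// and falls back to escaped plain text instead.
function highlightCode(code: string, lang: string): string {
  try {
    return codeToHtmlStub(code, lang);
  } catch {
    if (lang === "text") return `<pre>${escapeHtml(code)}</pre>`;
    return highlightCode(code, "text");
  }
}
```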
* refactor: replace legacy syntax highlighters with ShowCode
- ReadFileTool: swap inline CodeBlock + duplicated header/button JSX for
a single <ShowCode> — removes the fragile [&>div>div]:max-h-[300px]
deep selector and the duplicated open-in-pane button pattern flagged in
code review
- MarkdownRenderer/CodeBlock (desktop): replace react-syntax-highlighter
(Prism) with ShowCode, aligning markdown code fences with the shared
Shiki-based highlighter used throughout the chat UI
* fix(chat): address PR review feedback
- Move useMemo hooks before early returns in AskUserQuestionToolCall and
SubagentInnerToolCall to fix React Rules of Hooks violations
- Use hasFileContent (content !== undefined) guard in ReadOnlyToolCall
so empty files render in the code viewer instead of falling back
- Tighten @mention regex to require word-boundary before @ so email
addresses and decorators are not rewritten as file mentions on round-trip
- Add trigger to ScrollAnchor useEffect deps so footerScrollTrigger bumps
actually retrigger the scroll effect
- Add pendingQuestion to bumpFooterScroll useEffect deps in ChatPaneInterface
so the chat scrolls when the question overlay appears or disappears
- Roll back optimistic answeredQuestionId and pane status if
respondToQuestion RPC fails in legacy ChatPaneInterface
- Guard Shiki highlightCode fallback so it does not recurse infinitely
when language === "text" already
- Add aria-label to icon-only buttons (expand/collapse, open, copy) in ShowCode
- Remove dead _interruptedByAbortedQuestion useMemo from legacy ChatMessageList
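The @mention tightening above can be illustrated with two patterns. These regexes are illustrative, not the exact ones in the PR: the point is that requiring start-of-string or whitespace before `@` leaves email addresses and decorators alone.

```typescript
// Naive pattern: also matches the "@" inside email addresses.
const naive = /@([\w./-]+)/g;

// Tightened pattern: "@" must sit at the start of the string or after
// whitespace, so "user@example.com" is not rewritten as a file mention.
const tightened = /(?:^|(?<=\s))@([\w./-]+)/g;

const text = "see @src/app.ts and mail user@example.com";

const naiveHits = [...text.matchAll(naive)].map((m) => m[1]);
const tightHits = [...text.matchAll(tightened)].map((m) => m[1]);
```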
* fix(chat): address second round of PR review feedback
- Add hasPendingQuestionToolCall to workspace path messageListHelpers so
assistant messages with active ask_user calls stay visible during a run
- Reset QuestionInputOverlay state (customText, submittedLabel) when the
question prop changes identity, not just on mount
- Fix Enter/Tab in file-mention mode only consuming the event when a file
is actually selected; falls through to normal submit when results are empty
- Keep isError indicator visible in ToolCallRow status slot when the row is
expanded (previously the error icon disappeared on open)
* fix(chat): address third round of PR review feedback
- Fix colorize=false fading first code token when line numbers are
disabled: add a shiki-line-number class to gutter spans and target
that class instead of span:first-child in the CSS selector
- Add e.preventDefault() to Space key handler in ClickableFilePath
so activating via keyboard does not also scroll the container
- Fix file mention round-trip for paths containing spaces: serializer
now emits @"path with spaces" and parser handles both quoted and
unquoted forms
- Add null guard in FileMentionNode so malformed/pasted content with
a missing path attr does not crash rendering
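The quoted-path round-trip above can be sketched as a serializer/parser pair. Function names and the exact regex are illustrative, assuming the `@"path with spaces"` / bare `@path` forms described in the commit:

```typescript
// Serializer: quote paths containing whitespace so they survive round-trip.
function serializeMention(path: string): string {
  return /\s/.test(path) ? `@"${path}"` : `@${path}`;
}

// Parser: accept both @"quoted path" and bare @path forms.
const MENTION_RE = /@(?:"([^"]+)"|(\S+))/g;

function parseMentions(text: string): string[] {
  return [...text.matchAll(MENTION_RE)].map((m) => m[1] ?? m[2]);
}
```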
* fix(chat): address fourth round of PR review feedback
- Fix ReadOnlyToolCall lineRange: disk read always returns the whole
file so always display 1–N (trimming trailing newline before counting)
- Fix Shiki fallback to render escaped plain text instead of empty
strings when codeToHtml fails for the "text" language itself
- Trim trailing newline before computing lineCount in ShowCode so files
ending with \n do not trigger isOverflowing one line early
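The trailing-newline fix above amounts to trimming one final `\n` before counting. A minimal sketch (the helper name is hypothetical):

```typescript
// Count display lines, trimming a single trailing newline so files that
// end with "\n" do not report one extra (empty) line and trip the
// overflow threshold early.
function lineCount(content: string): number {
  const trimmed = content.endsWith("\n") ? content.slice(0, -1) : content;
  if (trimmed === "") return 0;
  return trimmed.split("\n").length;
}
```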
* chore: formatting and cleanup
* fix(chat): close mention popup before falling through Enter when results empty
* fix(chat): address fifth round of PR review feedback
- Re-add inputValue to SlashCommandPreviewPopover anchor effect deps so
the virtual anchor re-measures when typing shifts the chip's position
- Clamp slash menu selectedIndex in onUpdate when filtered results shrink,
matching the existing mention-menu clamping behavior
- Return true (consume event) when Enter/Tab closes empty mention popup so
the event does not propagate to insert a paragraph break
- Mirror focus-on-dismiss effect in v2 workspace ChatInputFooter so the
editor regains focus after the question overlay unmounts
- Fix trailing-slash paths rendering empty label in ClickableFilePath
by using || instead of ?? for the basename fallback
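The `||` vs `??` fix above hinges on `split` producing an empty last segment for trailing-slash paths: `??` only replaces `null`/`undefined`, so it keeps the empty string, while `||` falls through. A sketch (the helper name is hypothetical):

```typescript
// Basename for display; a trailing slash yields an empty last segment,
// which ?? would keep but || replaces with the full path.
function displayLabel(path: string): string {
  const segments = path.split("/");
  return segments[segments.length - 1] || path;
}

// The buggy variant for comparison: ?? does not catch empty strings.
const withNullish = "src/utils/".split("/").at(-1) ?? "src/utils/"; // ""
```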
* chore: rebuild bun.lock after rebase
* Fix typecheck
* refactor(chat): align skill handling with upstream mastra
Removes the fork-dependent preload wiring (metadata.skills →
preloadSkills pass-through) that was a silent no-op on upstream
mastracode. Keeps the SkillToolCall renderer so load_skill tool
calls emitted by upstream's native skills system render with their
own UI.
Rewrites docs/skill-preload-feature.md to describe the upstream
agent-autonomous model (SKILL.md discovery in .claude/skills,
.agents/skills, .mastracode/skills).
* chore(deps): bump mastra to 0.15.0-alpha.3 / 1.26.0-alpha.3
Brings in upstream mastra's native skills system (search_skills +
load_skill tools, SKILL.md discovery via skillPaths) which the
SkillToolCall renderer in this PR now consumes for free.
- mastracode: 0.14.0 → 0.15.0-alpha.3
- @mastra/core: 1.25.0 → 1.26.0-alpha.3
- @mastra/mcp: 1.3.1 → 1.5.1-alpha.1
Applied in apps/desktop, packages/chat, packages/host-service.
* chore: alphabetize @tiptap/pm in desktop package.json
Auto-applied by biome/sherif.
* fix(chat): cut display polling to 4fps and restore query cache defaults (#3562)
Memory leak and CPU spiral root-caused to `staleTime: 0, gcTime: 0` +
60fps polling: React Query can't dedupe or GC anything, and the render
path churns allocations every 16ms.
Restoring the React Query defaults (5min gcTime) fixes the leak. Server
poll rate is independent of perceived stream smoothness — StreamingMessageText
already reveals text client-side at 60fps from whatever buffer the server
delivers. 4fps polling keeps that buffer fed with plenty of headroom.
Also removes the `isRunning` invalidation effect — redundant when the
query is polling.
Builds on and supersedes #3170 by @thepathmakerz, which diagnosed the
same root cause. This version takes the subtractive path (-21 lines)
instead of adaptive polling (+36).
Closes #3049
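The decoupling described above can be modeled with two pure functions: the server poll only extends a buffer, while a fast client tick advances the reveal cursor. Constants follow the names mentioned in this PR (STREAM_TEXT_TICK_MS, 2 chars/tick); the state shape and function names are illustrative.

```typescript
// Client-side reveal: the server poll appends to `buffer` at ~4fps, while
// a 16ms tick reveals CHARS_PER_TICK characters for smooth display.
const STREAM_TEXT_TICK_MS = 16;
const CHARS_PER_TICK = 2;

// Upper bound on reveal throughput: (1000 / 16) * 2 = 125 chars/s,
// comfortably fed by one buffer extension every 250ms.
const MAX_REVEAL_PER_SECOND = (1000 / STREAM_TEXT_TICK_MS) * CHARS_PER_TICK;

interface RevealState {
  buffer: string; // full text delivered by the server so far
  revealed: number; // how many characters are currently shown
}

// One animation tick: advance the reveal cursor, clamped to the buffer.
function tick(state: RevealState): RevealState {
  return {
    ...state,
    revealed: Math.min(state.buffer.length, state.revealed + CHARS_PER_TICK),
  };
}

// One server poll (every 250ms at 4fps): just extend the buffer.
function poll(state: RevealState, chunk: string): RevealState {
  return { ...state, buffer: state.buffer + chunk };
}
```

Because `tick` runs independently of `poll`, lowering the poll rate changes only how far ahead the buffer stays, not how smoothly text appears.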
* feat(chat): slash command chip UX enhancements
- Argument editing inline in chip: auto-focus on insert, right-arrow to exit, double-click to re-enter
- Commands without argumentHint hide the colon/input entirely
- Model command shows a dropdown of available models (no free-form text)
- Chip input auto-sizes as user types (shrinks to content width)
- Dropdown positioned above chip (side="top"), ArrowUp/Down navigate options, Tab/Enter commit selection
- Menu reopens automatically when deleting value back to empty
- Preview popover and select dropdown are mutually exclusive (preview only on hover/node-select, never while arg input is focused)
- Focus shortcut hint moved inside TiptapPromptEditor (accepts focusShortcutText prop)
* chore: refresh bun.lock after pull
* test(chat): drop shallow ChatMessageList snapshot tests
These tests replace every child component with a mock placeholder
and assert on literal strings appearing in the rendered HTML. That
tested mock plumbing, not behavior — every SUT import change broke
them regardless of whether the actual render output changed, and
the 'SUBAGENT_EXECUTION_MESSAGE' assertion was checking for a
component this PR intentionally inlined.
The useful bit (filter/ordering logic in messageListHelpers) is
better covered by a direct unit test — leaving as a follow-up.
Summary
Alternative, subtractive version of #3170 by @thepathmakerz. Same root cause, same bug closed, ~-21 lines instead of +36.
Credit: all diagnostic work and the root-cause analysis belongs to @thepathmakerz in #3170. This PR just takes a more aggressive deletion path — please review alongside that PR and pick whichever fits better.
Root cause (same as #3170)
`staleTime: 0, gcTime: 0` combined with 60fps polling (`fps = 60` default, 16ms refetch) means React Query cannot dedupe or garbage-collect anything — it allocates and discards a fresh query cache entry ~60 times a second, forever. Over an hour that drifts into multi-GB heap and a CPU death spiral.
This version
- `staleTime: 0, gcTime: 0` removed from both hooks → React Query defaults (gcTime: 5min) kick back in. This is what fixes the leak.
- `fps = 60` → `fps = 4` (250ms poll) → kills the CPU spiral at the source.
- `isRunning` invalidation `useEffect` removed → redundant when the query is polling anyway; also drops now-unused `utils`.
No adaptive polling, no `wasRunningRef`, no function-form `refetchInterval`, no `IDLE_*`/`GC_TIME_*` constants.
Why 4fps doesn't hurt streaming UX
StreamingMessageText (apps/desktop/.../StreamingMessageText.tsx) already animates text reveal client-side at its own 16ms interval (STREAM_TEXT_TICK_MS), revealing 2 chars/tick from whatever buffer the server has delivered. Perceived smoothness is decoupled from the server poll rate — the poll just needs to keep the reveal buffer fed, which 4fps does with plenty of headroom vs. typical LLM token arrival (~50–200ms chunks).
Tradeoffs vs #3170
- No `displayQuery` function-form `refetchInterval` and `messagesQuery` static one, no cross-session `wasRunningRef` edge case.
Test plan
- `bun run --filter @superset/chat typecheck` passes
- `bun run --filter @superset/desktop typecheck` passes
- `bun test src/client/hooks/use-chat-display` — 5/5 pass
Closes #3049
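The fps-to-interval relationship and the restored cache defaults can be sketched as plain values. The helper name follows `toRefetchIntervalMs` from the hooks, but the exact rounding and option shapes here are illustrative, not the real implementation:

```typescript
// fps → refetch interval: 60fps ≈ 16ms (the old default), 4fps = 250ms.
function toRefetchIntervalMs(fps: number): number {
  return Math.round(1000 / Math.max(1, fps));
}

// Query options before and after, shown as plain objects.
const before = {
  refetchInterval: toRefetchIntervalMs(60), // ~16ms — 60 polls/s
  staleTime: 0,
  gcTime: 0, // cache entries never retained: allocate-and-discard every poll
};

const after = {
  refetchInterval: toRefetchIntervalMs(4), // 250ms
  // staleTime / gcTime omitted → React Query defaults (gcTime: 5 minutes) apply
};
```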
Summary by cubic
Fixes chat memory leak and CPU spikes by restoring React Query cache defaults and cutting display polling to 4fps. Streaming stays smooth because the client animates text reveal.
- Removed `staleTime: 0` and `gcTime: 0` so the default 5‑minute cache GC works.
- Lowered the `fps` default from 60 to 4 (250ms poll) in both chat display hooks.
- Removed the `isRunning` invalidation effect in the desktop workspace hook.
Written for commit 16356b7. Summary will update on new commits.
Summary by CodeRabbit