feat: lossless terminal output with on-demand retrieval #10944
Conversation
Re-review complete on latest commit. No additional issues found beyond what is already addressed.
Flagged a few correctness and contract issues to address before merge.
- Remove unused barrel file (src/integrations/terminal/index.ts) to fix knip check
- Fix Windows path test in OutputInterceptor.test.ts by using path.normalize()
- Add missing translations for terminal.outputPreviewSize settings to all 17 locales
Force-pushed 53939b6 to e61dc7c
…imit settings

These settings were redundant with terminalOutputPreviewSize, which controls the preview shown to the LLM. The line/char limits were for UI truncation, which is now handled with hardcoded defaults (500 lines, 50K chars) since they don't need to be user-configurable.

- Remove settings from packages/types schemas
- Remove DEFAULT_TERMINAL_OUTPUT_CHARACTER_LIMIT constant
- Update compressTerminalOutput() to use hardcoded limits
- Update ExecuteCommandTool to not pass limit parameters
- Update ClineProvider state handling
- Update webview context and settings
- Update tests to not use removed settings
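The hardcoded truncation described above can be sketched as follows. The limits (500 lines, 50K chars) come from the commit message; the function body here is illustrative, not the actual implementation.

```typescript
// Illustrative sketch of tail-biased UI truncation with hardcoded limits.
// The real compressTerminalOutput() in the PR may differ in detail.
const MAX_LINES = 500
const MAX_CHARS = 50_000

function compressTerminalOutput(output: string): string {
	// Keep the most recent lines, since the tail of terminal output
	// usually carries the final result and exit status.
	let lines = output.split("\n")
	if (lines.length > MAX_LINES) {
		lines = lines.slice(lines.length - MAX_LINES)
	}
	let result = lines.join("\n")
	// Apply the character cap after the line cap, again keeping the tail.
	if (result.length > MAX_CHARS) {
		result = result.slice(result.length - MAX_CHARS)
	}
	return result
}
```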
- Replace single buffer with separate headBuffer and tailBuffer
- Each buffer gets 50% of the preview budget
- Head captures the beginning of output; tail keeps a rolling end
- Middle content is dropped when output exceeds the threshold
- Preview shows: head + [omission indicator] + tail
- Add tests for head/tail split behavior

This approach ensures the LLM sees both:
- the beginning (command startup, environment info, early errors)
- the end (final results, exit codes, error summaries)
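The head/tail split above can be sketched as a small buffer class. This is a hedged illustration of the technique, not the PR's OutputInterceptor; class and method names are made up.

```typescript
// Illustrative head/tail preview buffer: head fills once with the start of the
// output, tail is a rolling window over the end, middle content is dropped.
class HeadTailBuffer {
	private head = ""
	private tail = ""
	private total = 0

	constructor(private readonly budget: number) {} // total preview budget (chars)

	append(chunk: string): void {
		this.total += chunk.length
		const half = Math.floor(this.budget / 2)
		// Head: capture the beginning of the output until its half-budget is full.
		if (this.head.length < half) {
			const take = half - this.head.length
			this.head += chunk.slice(0, take)
			chunk = chunk.slice(take)
		}
		// Tail: rolling window over everything that follows the head.
		this.tail = (this.tail + chunk).slice(-half)
	}

	preview(): string {
		const omitted = this.total - this.head.length - this.tail.length
		return omitted > 0
			? `${this.head}\n[... ${omitted} characters omitted ...]\n${this.tail}`
			: this.head + this.tail
	}
}
```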
Force-pushed 2e5bb4f to d4680cd
- OutputInterceptor: buffer ALL chunks before spilling to disk to preserve full content losslessly. Previously, the rolling tail buffer could drop middle content before the spill decision was made.
- read_command_output schema: include all properties in the 'required' array for OpenAI strict mode compliance. With strict: true, all properties must be listed in required (optional ones use null union types).
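The strict-mode contract mentioned above can be illustrated with a schema fragment. The property names here are illustrative stand-ins, not the PR's exact read_command_output schema; the shape (everything in `required`, optional params as null unions) is the OpenAI strict-mode rule the commit describes.

```typescript
// Illustrative strict-mode tool schema: with strict: true, every property must
// appear in "required"; optional parameters become nullable instead of omitted.
const readCommandOutputSchema = {
	type: "function" as const,
	function: {
		name: "read_command_output",
		strict: true,
		parameters: {
			type: "object",
			properties: {
				artifact_id: { type: "string" },
				offset: { type: ["integer", "null"] }, // optional → null union type
				limit: { type: ["integer", "null"] }, // optional → null union type
			},
			required: ["artifact_id", "offset", "limit"], // ALL properties listed
			additionalProperties: false,
		},
	},
}
```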
Replace fs.readFile with chunked streaming in searchInArtifact() to keep memory usage bounded for large command outputs. Instead of loading the entire file into memory, it now reads in 64KB chunks and processes lines as they are encountered. This addresses the concern that loading 100MB+ build logs into memory defeats the purpose of the persisted-output feature.
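The chunked line-scanning approach can be sketched as below. This is a minimal illustration, assuming a plain-text artifact on disk; the function name and signature are made up, and only the 64KB chunk size comes from the commit message.

```typescript
import { createReadStream } from "node:fs"

// Illustrative chunked search: stream the file in 64KB chunks, reassembling
// lines that span chunk boundaries, so memory stays bounded for huge outputs.
async function countMatches(filePath: string, pattern: RegExp): Promise<number> {
	const stream = createReadStream(filePath, { highWaterMark: 64 * 1024, encoding: "utf8" })
	let carry = "" // partial line left over from the previous chunk
	let matches = 0
	for await (const chunk of stream) {
		const lines = (carry + chunk).split("\n")
		carry = lines.pop() ?? "" // last element may be an incomplete line
		for (const line of lines) {
			if (pattern.test(line)) matches++ // pattern should be non-global
		}
	}
	if (carry && pattern.test(carry)) matches++ // final unterminated line
	return matches
}
```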
- OutputInterceptor.finalize() now awaits the stream flush before returning. This ensures artifact files are fully written before the artifact_id is advertised to the LLM, preventing partial reads.
- Remove strict mode from the read_command_output native tool schema. With strict: true, OpenAI requires all params in 'required', forcing the LLM to provide explicit null values for optional params, which created verbose tool calls. Now optional params can be omitted entirely.
- Update tests to handle the async finalize() method
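Awaiting the flush can be sketched with Node's stream events. This is a hedged sketch, assuming the artifact is written through a standard fs write stream; the real finalize() likely does more.

```typescript
import { createWriteStream, type WriteStream } from "node:fs"
import { once } from "node:events"

// Illustrative finalize: end the stream and wait for "finish", which fires
// only after all buffered data has been flushed to disk. Only after this
// resolves is it safe to advertise the artifact_id for reading.
async function finalize(stream: WriteStream): Promise<void> {
	stream.end()
	await once(stream, "finish")
}
```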
- Update the RooTerminalCallbacks.onCompleted type to allow async callbacks (void | Promise<void>)
- Track onCompleted completion with a promise and await it before using persistedResult
- This fixes a race condition where exitDetails could be set before the async finalize() completes
- Fix a test callback to not return an assignment value
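The callback-contract change can be sketched as follows. The type mirrors the `void | Promise<void>` union from the commit message; the helper name is illustrative.

```typescript
// Callback may be sync or async, matching the updated onCompleted type.
type OnCompleted = (output: string) => void | Promise<void>

// Illustrative wrapper: Promise.resolve() normalizes both sync (void) and
// async returns, so the caller can await completion before reading any state
// the callback sets (e.g. persistedResult), avoiding the race described above.
async function emitCompleted(onCompleted: OnCompleted, output: string): Promise<void> {
	await Promise.resolve(onCompleted(output))
}
```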
daniel-lxs left a comment:
LGTM
- Update preview sizes: 2KB/4KB/8KB → 5KB/10KB/20KB (default 10KB)
- Update the read_command_output default limit: 32KB → 40KB
- Match the spec's MODEL_TRUNCATION_BYTES (10KB) for the preview
- Match the spec's DEFAULT_MAX_OUTPUT_TOKENS (10000 tokens × 4 bytes = 40KB) for retrieval
- Update all related tests and documentation
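The arithmetic above can be written out as constants. The 4-bytes-per-token figure is the rough heuristic the commit uses; constant names other than MODEL_TRUNCATION_BYTES and DEFAULT_MAX_OUTPUT_TOKENS are illustrative.

```typescript
// Spec-derived limits (heuristic: ~4 bytes per token of model output).
const BYTES_PER_TOKEN = 4
const DEFAULT_MAX_OUTPUT_TOKENS = 10_000
// 10000 tokens × 4 bytes = 40KB default for read_command_output retrieval.
const DEFAULT_READ_LIMIT_BYTES = DEFAULT_MAX_OUTPUT_TOKENS * BYTES_PER_TOKEN
// 10KB default for the preview shown to the model.
const MODEL_TRUNCATION_BYTES = 10 * 1024
```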
Update all 18 locale files with new preview size labels:
- small: 2KB → 5KB
- medium: 4KB → 10KB
- large: 8KB → 20KB
When using search mode, the UI now shows the search pattern and match count instead of the misleading byte range (0 B - totalSize).
- Added searchPattern and matchCount fields to the ClineSayTool type
- Updated ReadCommandOutputTool to return the match count from search operations
- Updated ChatRow to display 'search: "pattern" • N matches' for search mode
Merge branch 'main' into this branch, picking up upstream changes (RooCodeInc#10560 onward), including:
- Rename of Roo Code Cloud Provider to Roo Code Router, plus follow-up router fixes
- Searchable settings UI and related UX improvements
- Removal of XML tool calling in favor of native tools, plus cleanup of the POWER_STEERING and MULTI_FILE_APPLY_DIFF experiments
- Intelligent Context Condensation v2 and v2.1 (smart code folding with tree-sitter signatures), and removal of the custom condensing model option
- Git worktree management and subsequent worktree UX fixes
- OpenAI Codex provider with OAuth subscription authentication
- Gemini thought-signature and tool-call compatibility fixes
- MCP improvements: fuzzy tool-name matching, wildcard alwaysAllow support, a too-many-tools warning, and removal of the MCP SERVERS section from the system prompt
- CLI releases v0.0.47–v0.0.49 and extension releases v3.39.3 through v3.44.2
- Numerous provider/model updates (Fireworks, Z.AI, Cerebras, VertexAI Kimi K2, gpt-5.2-codex) and assorted bug fixes
* refactor: improve code quality in condense module - Convert summarizeConversation to use options object instead of 11 positional params - Extract duplicated getFilesReadByRoo error handling into helper method - Remove unnecessary re-export of generateFoldedFileContext - Update all test files to use new options object pattern * fix: address roomote feedback - batch error logging and early budget exit --------- Co-authored-by: Roo Code <[email protected]> Co-authored-by: daniel-lxs <[email protected]> * Release v3.45.0 (RooCodeInc#11036) * Changeset version bump (RooCodeInc#11037) Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Matt Rubens <[email protected]> * fix: include reserved output tokens in task header percentage calculation (RooCodeInc#11034) Co-authored-by: Roo Code <[email protected]> * feat: lossless terminal output with on-demand retrieval (RooCodeInc#10944) * refactor(core): optimize mistake detection and model switching logic --------- Co-authored-by: roomote[bot] <219738659+roomote[bot]@users.noreply.github.com> Co-authored-by: Roo Code <[email protected]> Co-authored-by: Matt Rubens <[email protected]> Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com> Co-authored-by: Bruno Bergher <[email protected]> Co-authored-by: Daniel <[email protected]> Co-authored-by: Archimedes <[email protected]> Co-authored-by: Patrick Decat <[email protected]> Co-authored-by: Hannes Rudolph <[email protected]> Co-authored-by: T <[email protected]> Co-authored-by: Chris Estreich <[email protected]> Co-authored-by: Seb Duerr <[email protected]> Co-authored-by: daniel-lxs <[email protected]> Co-authored-by: MP <[email protected]> Co-authored-by: Erdem <[email protected]> Co-authored-by: erdemgoksel <erdemgoksel@MAU-BILISIM42> Co-authored-by: rossdonald <[email protected]> Co-authored-by: Michaelzag <[email 
protected]> Co-authored-by: Thanh Nguyen <[email protected]> Co-authored-by: Peter Dave Hello <[email protected]>
Summary
Refactors terminal command output handling to preserve full output losslessly. Instead of truncating large outputs and losing information, Roo now saves complete output to disk and provides the LLM with a preview plus the ability to retrieve the full content on demand.
Closes #10941
Problem
Users experienced several pain points with the previous terminal output handling:
- When running npm install, cargo build, test suites, or other commands that produce significant output, important error messages at the end could get truncated away

Solution
This PR implements a "persisted output" pattern:
- A new read_command_output tool lets the LLM fetch the complete output or search for specific patterns when needed

User-Facing Changes
Settings
Before: Two confusing sliders
After: One simple dropdown
Output Format
Small output (under preview threshold): Works exactly as before - full output shown inline.
Large output (over preview threshold): LLM receives:
- The head and tail of the output, with a [...N bytes omitted...] indicator in the middle
- The name of the saved artifact file (cmd-{timestamp}.txt)
- A note to use read_command_output for the full content or a search

Technical Details
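As a rough illustration of the preview format described above, a head/tail preview with an omitted-bytes marker could be built like this. This is a minimal sketch: buildPreview, the exact limit value, and the marker and hint wording are assumptions based on this description, not the actual implementation.

```typescript
// Sketch of a head/tail preview builder. All names and exact wording are
// illustrative; byte counting here uses string length, which is only exact
// for single-byte characters.
const PREVIEW_LIMIT_BYTES = 10_000 // stand-in for a constant like MODEL_TRUNCATION_BYTES

function buildPreview(fullOutput: string, artifactFile: string): string {
	if (fullOutput.length <= PREVIEW_LIMIT_BYTES) {
		// Small output: shown inline, behavior unchanged.
		return fullOutput
	}
	const half = Math.floor(PREVIEW_LIMIT_BYTES / 2)
	const head = fullOutput.slice(0, half)
	const tail = fullOutput.slice(-half)
	const omitted = fullOutput.length - head.length - tail.length
	return (
		head +
		`\n[...${omitted} bytes omitted...]\n` +
		tail +
		`\n(Full output saved to ${artifactFile}; use read_command_output to retrieve or search it.)`
	)
}
```

Keeping both the head and the tail matters for the use case in the Problem section: build and test commands usually print their decisive error messages at the very end.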
Output Limits (Aligned with Terminal Integration Spec)
- MODEL_TRUNCATION_BYTES (10KB)
- read_command_output default: DEFAULT_MAX_OUTPUT_TOKENS (10,000 tokens × 4 bytes)

New Tool: read_command_output
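Conceptually, the tool's search mode behaves like a grep over the saved artifact. A sketch of that idea follows; searchOutput, SearchHit, and the maxHits parameter are hypothetical and do not reflect the tool's actual contract.

```typescript
// Hypothetical sketch of pattern search over a saved command-output artifact.
// The real read_command_output tool's parameters and return shape may differ.
interface SearchHit {
	line: number // 1-based line number within the artifact
	text: string // the matching line
}

function searchOutput(artifactText: string, pattern: RegExp, maxHits = 50): SearchHit[] {
	const hits: SearchHit[] = []
	const lines = artifactText.split("\n")
	for (let i = 0; i < lines.length && hits.length < maxHits; i++) {
		// Use a non-global RegExp here: a /g flag would make test() stateful.
		if (pattern.test(lines[i])) {
			hits.push({ line: i + 1, text: lines[i] })
		}
	}
	return hits
}
```

Searching server-side like this lets the LLM locate a specific failure in a multi-megabyte log without pulling the whole artifact back into context.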
Artifacts are stored at:
globalStoragePath/tasks/{taskId}/command-output/cmd-{timestamp}.txtFiles Changed
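Assuming the layout above, assembling an artifact path could look roughly like this. Only the directory layout comes from the PR; buildArtifactPath and the use of a millisecond timestamp are assumptions for illustration.

```typescript
import * as path from "node:path"

// Sketch: build the on-disk location for a command-output artifact.
// posix joins are used here for a deterministic example; real code would
// use the platform path module and the extension's globalStoragePath.
function buildArtifactPath(globalStoragePath: string, taskId: string, timestamp: number): string {
	return path.posix.join(globalStoragePath, "tasks", taskId, "command-output", `cmd-${timestamp}.txt`)
}
```

Scoping artifacts under the task ID keeps each task's command outputs separate and makes cleanup on task deletion a single directory removal.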
- packages/types/src/global-settings.ts - Updated preview size constants (5KB/10KB/20KB)
- src/core/tools/ReadCommandOutputTool.ts - Updated default limit to 40KB
- src/core/prompts/tools/native-tools/read_command_output.ts - Updated tool description
- src/integrations/terminal/OutputInterceptor.ts - Head/tail buffer implementation
- src/core/tools/ExecuteCommandTool.ts - Integration with OutputInterceptor

Testing
- OutputInterceptor
- ReadCommandOutputTool
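The core property these suites need to verify can be illustrated with a self-contained assertion. The interceptor is reduced here to a toy head/tail splitter; splitHeadTail is illustrative and not the real OutputInterceptor API.

```typescript
// Toy stand-in for the interceptor's head/tail behavior, showing the
// "lossless" invariant the real tests would check: every byte is either
// in the preview or counted as omitted (and still recoverable from disk).
function splitHeadTail(output: string, limit: number): { head: string; tail: string; omitted: number } {
	if (output.length <= limit) {
		return { head: output, tail: "", omitted: 0 }
	}
	const half = Math.floor(limit / 2)
	return {
		head: output.slice(0, half),
		tail: output.slice(-half),
		omitted: output.length - 2 * half,
	}
}

// Invariant: head + tail + omitted accounts for the entire output.
const sample = "a".repeat(1234)
const { head, tail, omitted } = splitHeadTail(sample, 100)
console.assert(head.length + tail.length + omitted === sample.length, "no bytes unaccounted for")
```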