Conversation

@roomote
Contributor

@roomote roomote bot commented Aug 2, 2025

This PR reduces the minimum thinking budget for Gemini 2.5 Pro models from 1024 to 128 tokens, as requested. The default thinking budget for these models is also changed from 8192 to 128.

Changes:

  • Added GEMINI_25_PRO_MIN_THINKING_TOKENS constant set to 128
  • Updated model-params.ts to detect Gemini 2.5 Pro models and apply the 128 minimum
  • Changed default thinking tokens from 8192 to 128 for Gemini 2.5 Pro models only
  • Added comprehensive tests to verify the new behavior

Important: This change ONLY affects Gemini 2.5 Pro models. All other models, including other hybrid reasoning models, retain their 1024 minimum thinking budget and are otherwise unaffected.
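
For reference, a minimal TypeScript sketch of the backend behavior described above. Apart from GEMINI_25_PRO_MIN_THINKING_TOKENS and DEFAULT_HYBRID_REASONING_MODEL_THINKING_TOKENS, which the PR itself mentions, the names and wiring here are assumptions, and the separate cap of the budget at a percentage of maxTokens is not shown:

// Sketch only: helper name and surrounding wiring are assumptions, not the PR's exact code.
const GEMINI_25_PRO_MIN_THINKING_TOKENS = 128
const DEFAULT_HYBRID_REASONING_MODEL_THINKING_TOKENS = 8192
const DEFAULT_MIN_THINKING_TOKENS = 1024

function resolveThinkingBudget(modelId: string, customMaxThinkingTokens?: number): number {
	const isGemini25Pro = modelId.includes("gemini-2.5-pro")
	// Gemini 2.5 Pro defaults to 128 tokens; other hybrid reasoning models keep the 8192 default.
	const defaultThinkingTokens = isGemini25Pro
		? GEMINI_25_PRO_MIN_THINKING_TOKENS
		: DEFAULT_HYBRID_REASONING_MODEL_THINKING_TOKENS
	const minThinkingTokens = isGemini25Pro ? GEMINI_25_PRO_MIN_THINKING_TOKENS : DEFAULT_MIN_THINKING_TOKENS
	// If `customMaxThinkingTokens` is not specified, use the default, then clamp to the model's minimum.
	return Math.max(customMaxThinkingTokens ?? defaultThinkingTokens, minThinkingTokens)
}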


Important

Reduces Gemini 2.5 Pro models' minimum and default thinking budget to 128 tokens, updating logic and UI components accordingly.

  • Behavior:
    • Reduces minimum thinking budget for Gemini 2.5 Pro models from 1024 to 128 tokens in model-params.ts.
    • Changes default thinking budget for Gemini 2.5 Pro models from 8192 to 128 tokens in model-params.ts.
    • Ensures reasoning budget is clamped to at least 128 tokens for Gemini 2.5 Pro models.
  • UI:
    • Updates ThinkingBudget.tsx to reflect new minimum and step values for Gemini 2.5 Pro models.
    • Adjusts slider behavior to use 128 as the step for Gemini 2.5 Pro models.
  • Tests:
    • Adds tests in model-params.spec.ts to verify new minimum and default thinking budget for Gemini 2.5 Pro models.
    • Adds tests in ThinkingBudget.spec.tsx to ensure UI reflects new behavior for Gemini 2.5 Pro models.

This description was created by Ellipsis for 1fa71fb. You can customize this summary. It will automatically update as commits are pushed.

- Add GEMINI_25_PRO_MIN_THINKING_TOKENS constant set to 128
- Update model-params.ts to detect Gemini 2.5 Pro models and use 128 as minimum
- Change default thinking tokens from 8192 to 128 for Gemini 2.5 Pro models
- Add tests to verify the new 128 minimum for Gemini 2.5 Pro
- Other models remain unaffected with 1024 minimum thinking budget
@roomote roomote bot requested review from cte, jr and mrubens as code owners August 2, 2025 06:13
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. enhancement New feature or request labels Aug 2, 2025
Contributor Author

@roomote roomote bot left a comment


Reviewing my own code is like debugging in production—technically possible, but morally questionable.

max_tokens: 128,
},
})
})
Contributor Author


Consider adding test cases for model ID variations and edge cases:

  • Model IDs like "gemini-2.5-pro-latest" or "vertex-ai/gemini-2.5-pro"
  • The interaction between the 128 minimum and 80% rule when maxTokens is very small (e.g., 150)
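
A rough vitest-style sketch of such cases; the getModelParams import path, its argument shape, and the option names here are assumptions rather than the PR's actual test code:

// Sketch of the suggested edge-case tests; argument shapes and option names are assumptions.
import { describe, it, expect } from "vitest"
import { getModelParams } from "../model-params"

describe("Gemini 2.5 Pro thinking budget edge cases", () => {
	it("applies the 128 minimum to model ID variants", () => {
		for (const modelId of ["gemini-2.5-pro-latest", "vertex-ai/gemini-2.5-pro"]) {
			const { reasoningBudget } = getModelParams({ modelId, settings: { modelMaxThinkingTokens: 64 } } as any)
			expect(reasoningBudget).toBe(128)
		}
	})

	it("documents precedence between the 128 floor and the 80% rule when maxTokens is tiny", () => {
		const { reasoningBudget } = getModelParams({
			modelId: "gemini-2.5-pro",
			settings: { modelMaxTokens: 150, modelMaxThinkingTokens: 128 },
		} as any)
		// With maxTokens = 150, 80% is 120, which is below the 128 floor; the expected
		// winner of that conflict is exactly what this test should pin down.
		expect(reasoningBudget).toBeLessThanOrEqual(150)
	})
})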

// If `customMaxThinkingTokens` is not specified use the default.
reasoningBudget = customMaxThinkingTokens ?? DEFAULT_HYBRID_REASONING_MODEL_THINKING_TOKENS
// For Gemini 2.5 Pro, default to 128 instead of 8192
const defaultThinkingTokens = isGemini25Pro
Contributor Author


Could we make this comment more explicit about why Gemini 2.5 Pro has a different default? Perhaps mention performance or cost considerations that led to this decision?

@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Aug 2, 2025
@hannesrudolph
Collaborator

@roomote-agent run pr fixer

@roomote
Contributor Author

roomote bot commented Aug 2, 2025

Hi @hannesrudolph! I see your request to run PR fixer. Let me check out this PR and investigate what needs to be fixed. All CI checks are currently passing, so I'll review the code changes to ensure everything is implemented correctly.

- Fixed duplicate modelId.includes("gemini-2.5-pro") check
- Now properly checks for both "gemini-2.5-pro" and "gemini-25-pro" variants
@roomote
Contributor Author

roomote bot commented Aug 2, 2025

Hi @hannesrudolph! I've successfully fixed the issue in this PR.

Summary of changes:

  • Fixed the duplicate condition in src/api/transform/model-params.ts line 95
  • Changed from modelId.includes("gemini-2.5-pro") || modelId.includes("gemini-2.5-pro") to modelId.includes("gemini-2.5-pro") || modelId.includes("gemini-25-pro")
  • This now properly checks for both "gemini-2.5-pro" and "gemini-25-pro" model ID variants

Testing:

  • All existing tests pass ✅
  • CI checks are all green ✅

The fix has been pushed to the PR branch and is ready for review.

@daniel-lxs daniel-lxs moved this from Triage to PR [Needs Prelim Review] in Roo Code Roadmap Aug 2, 2025
@hannesrudolph hannesrudolph added PR - Needs Preliminary Review and removed Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. labels Aug 2, 2025
@daniel-lxs
Member

@roomote-agent This change should also affect the webview-ui/src/components/settings/ApiOptions.tsx component to allow the user to set the max reasoning tokens to 128. Push to this PR, do not create a new one.

@roomote
Contributor Author

roomote bot commented Aug 5, 2025

Hi @daniel-lxs! I see your request to update the webview-ui/src/components/settings/ApiOptions.tsx component to allow users to set the max reasoning tokens to 128. I'll work on this change and push it to this PR branch. Let me get started on that now.

…n UI

- Update ThinkingBudget component to detect Gemini 2.5 Pro models
- Set minimum thinking tokens to 128 for Gemini 2.5 Pro (instead of 1024)
- Adjust step size to 128 for Gemini 2.5 Pro models
- Add tests to verify the new behavior
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Aug 5, 2025
@roomote
Contributor Author

roomote bot commented Aug 5, 2025

Hi @daniel-lxs! I've successfully implemented the requested changes to allow users to set the max reasoning tokens to 128 for Gemini 2.5 Pro models in the UI.

Changes made:

  • ✅ Updated ThinkingBudget.tsx component to detect Gemini 2.5 Pro models
  • ✅ Set minimum thinking tokens to 128 for Gemini 2.5 Pro models (instead of the default 1024)
  • ✅ Adjusted the step size to 128 for Gemini 2.5 Pro models for finer granularity
  • ✅ Added comprehensive tests to verify the new behavior
  • ✅ All tests are passing

The changes have been pushed to this PR branch. The UI will now properly allow Gemini 2.5 Pro users to set their thinking budget as low as 128 tokens, matching the backend changes that were already implemented.
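
A minimal sketch of the UI-side logic described above; the Slider import path, prop names, and overall component shape are assumptions, while the 128 minimum and step for Gemini 2.5 Pro come from this PR:

// Sketch only: import path and props are assumptions; only the 128 min/step is from this PR.
import { Slider } from "@/components/ui"

interface ThinkingBudgetSliderProps {
	selectedModelId: string
	maxThinkingTokens: number
	value: number
	onChange: (tokens: number) => void
}

export function ThinkingBudgetSlider({ selectedModelId, maxThinkingTokens, value, onChange }: ThinkingBudgetSliderProps) {
	const isGemini25Pro = selectedModelId.includes("gemini-2.5-pro")
	// Gemini 2.5 Pro can go down to 128 tokens in 128-token steps; other models keep the 1024 minimum and step.
	const min = isGemini25Pro ? 128 : 1024
	const step = isGemini25Pro ? 128 : 1024

	return <Slider min={min} max={maxThinkingTokens} step={step} value={[value]} onValueChange={([v]) => onChange(v)} />
}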

CI checks are currently running and should complete shortly.

Member

@daniel-lxs daniel-lxs left a comment


LGTM

@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Aug 5, 2025
@daniel-lxs daniel-lxs moved this from PR [Needs Prelim Review] to PR [Needs Review] in Roo Code Roadmap Aug 5, 2025

if (shouldUseReasoningBudget({ model, settings })) {
	// Check if this is a Gemini 2.5 Pro model
	const isGemini25Pro = modelId.includes("gemini-2.5-pro") || modelId.includes("gemini-25-pro")
Collaborator

@mrubens mrubens Aug 5, 2025


Are there really gemini-25-pro models?

Member


oh, I don't think so

@daniel-lxs
Member

@roomote-agent remove the check for gemini-25-pro since that model doesn't exist

@roomote
Contributor Author

roomote bot commented Aug 5, 2025

Hi @daniel-lxs, I see your comment about removing the check for the non-existent "gemini-25-pro" model. Let me investigate and fix this for you.

- Remove gemini-25-pro check from model-params.ts
- Remove gemini-25-pro check from ThinkingBudget.tsx
- Remove test case for gemini-25-pro alternative naming

The gemini-25-pro model identifier does not exist. Only gemini-2.5-pro is valid.
@roomote
Contributor Author

roomote bot commented Aug 5, 2025

Hi @daniel-lxs, I've successfully removed all references to the non-existent "gemini-25-pro" model identifier from the codebase.

Here's what I changed:

  • ✅ Removed || modelId.includes("gemini-25-pro") check from src/api/transform/model-params.ts (line 95)
  • ✅ Removed || selectedModelId.includes("gemini-25-pro") check from webview-ui/src/components/settings/ThinkingBudget.tsx (line 28)
  • ✅ Removed the test case "should use min thinking tokens of 128 for Gemini 25 Pro models (alternative naming)" from webview-ui/src/components/settings/__tests__/ThinkingBudget.spec.tsx

The code now only checks for the valid "gemini-2.5-pro" model identifier. All tests are passing locally. The changes have been pushed to the PR branch and CI checks are running.

@mrubens mrubens merged commit 263e317 into main Aug 5, 2025
10 checks passed
@mrubens mrubens deleted the feature/gemini-25-pro-min-thinking-budget-128 branch August 5, 2025 22:43
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Aug 5, 2025
@github-project-automation github-project-automation bot moved this from PR [Needs Review] to Done in Roo Code Roadmap Aug 5, 2025
fxcl added a commit to tameslabs/Roo-Cline that referenced this pull request Aug 9, 2025