Add a couple more free models to the Roo provider #8304
Conversation
I found some issues that need attention:
- RooModelId union doesn't include the new IDs; consider deriving it from the model map or extending the union.
- Grok 4 Fast description says "multimodal" while supportsImages is false; either enable images or adjust wording.
- DeepSeek description mentions 128K but contextWindow is 163,840; align wording (~160K).
	description:
		"A versatile agentic coding stealth model that supports image inputs, accessible for free through Roo Code Cloud for a limited time. (Note: the free prompts and completions are logged by the model provider and used to improve the model.)",
},
"xai/grok-4-fast": {
P2: New model IDs were added but RooModelId (see line 3) still only covers two IDs. This can cause downstream typing friction. Suggest either:
- Redefine RooModelId to derive from the model map (keyof typeof rooModels), placing the type after the constant (see the sketch below); or
- Extend the union to include "xai/grok-4-fast" and "deepseek/deepseek-chat-v3.1".
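
A minimal sketch of the first option, assuming a simplified ModelInfo shape (the field names mirror the entries quoted in this diff; the real type in roo.ts may differ):

```ts
// Simplified stand-in for the real ModelInfo type in roo.ts (illustrative only).
interface ModelInfo {
	maxTokens: number
	contextWindow: number
	supportsImages: boolean
	inputPrice: number
	outputPrice: number
	description: string
}

// Declare the map first; `satisfies` checks each entry against ModelInfo
// while `as const` preserves the literal keys.
export const rooModels = {
	"xai/grok-4-fast": {
		maxTokens: 30_000,
		contextWindow: 2_000_000,
		supportsImages: false,
		inputPrice: 0,
		outputPrice: 0,
		description: "Grok 4 Fast is xAI's latest model with SOTA cost-efficiency and a 2M token context window.",
	},
	// ...other entries ("deepseek/deepseek-chat-v3.1", existing IDs, etc.)...
} as const satisfies Record<string, ModelInfo>

// Derived union: adding a key to rooModels automatically extends RooModelId.
export type RooModelId = keyof typeof rooModels
```

Note that with this approach the type alias has to sit after the constant, which matches the placement noted above.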
| "xai/grok-4-fast": { | ||
| maxTokens: 30_000, | ||
| contextWindow: 2_000_000, | ||
| supportsImages: false, |
P2: Description below calls Grok 4 Fast “multimodal”, but supportsImages is false. If images aren’t supported through the Roo provider for this free tier, consider removing “multimodal” from the description to prevent confusion.
	inputPrice: 0,
	outputPrice: 0,
	description:
		"Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. (Note: prompts and completions are logged by xAI and used to improve the model.)",
P2: The description calls Grok 4 Fast “multimodal” but images are disabled. If that’s intentional (no image support via Roo), consider adjusting the wording to avoid “multimodal”.
	inputPrice: 0,
	outputPrice: 0,
	description:
		"Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. (Note: prompts and completions are logged by xAI and used to improve the model.)",
| "Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. (Note: prompts and completions are logged by xAI and used to improve the model.)", | |
| "Grok 4 Fast is xAI's latest model with SOTA cost-efficiency and a 2M token context window. (Note: prompts and completions are logged by xAI and used to improve the model.)", |
	inputPrice: 0,
	outputPrice: 0,
	description:
		"DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active). It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference.",
P3: Description says “up to 128K tokens” but contextWindow is 163,840. Consider aligning the description with the configured window (163,840 = 160 × 1024 tokens, i.e. ~160K).
	inputPrice: 0,
	outputPrice: 0,
	description:
		"DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active). It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference.",
| "DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active). It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference.", | |
| "DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active). It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 160K tokens, and uses FP8 microscaling for efficient inference.", |

Important

Add xai/grok-4-fast and deepseek/deepseek-chat-v3.1 models to the Roo provider in roo.ts.

- Add xai/grok-4-fast and deepseek/deepseek-chat-v3.1 to the RooModelId type in roo.ts.
- Update rooModels in roo.ts with configurations for xai/grok-4-fast and deepseek/deepseek-chat-v3.1, including maxTokens, contextWindow, supportsImages, supportsPromptCache, inputPrice, outputPrice, and description.

This description was created for d6ea973. You can customize this summary. It will automatically update as commits are pushed.
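
Putting the summary and the review suggestions together, a rough sketch of the two new entries with the wording fixes applied. The constant name newRooModelEntries is only for illustration, and every field marked "placeholder" (DeepSeek's maxTokens and supportsImages, both supportsPromptCache flags) is an assumption rather than the PR's actual setting:

```ts
// Sketch only — not the PR's exact code. Values come from the quoted diff and
// review comments, except those explicitly marked as placeholders.
const newRooModelEntries = {
	"xai/grok-4-fast": {
		maxTokens: 30_000,
		contextWindow: 2_000_000,
		supportsImages: false,
		supportsPromptCache: false, // placeholder: not visible in the quoted diff
		inputPrice: 0,
		outputPrice: 0,
		description:
			"Grok 4 Fast is xAI's latest model with SOTA cost-efficiency and a 2M token context window. (Note: prompts and completions are logged by xAI and used to improve the model.)",
	},
	"deepseek/deepseek-chat-v3.1": {
		maxTokens: 16_384, // placeholder: not visible in the quoted diff
		contextWindow: 163_840, // 160 × 1024 tokens, i.e. ~160K
		supportsImages: false, // placeholder: not visible in the quoted diff
		supportsPromptCache: false, // placeholder: not visible in the quoted diff
		inputPrice: 0,
		outputPrice: 0,
		description:
			"DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active). It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 160K tokens, and uses FP8 microscaling for efficient inference.",
	},
} as const
```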