Merged
26 changes: 25 additions & 1 deletion packages/types/src/providers/roo.ts
@@ -1,6 +1,10 @@
import type { ModelInfo } from "../model.js"

export type RooModelId = "xai/grok-code-fast-1" | "roo/code-supernova"
export type RooModelId =
| "xai/grok-code-fast-1"
| "roo/code-supernova"
| "xai/grok-4-fast"
| "deepseek/deepseek-chat-v3.1"

export const rooDefaultModelId: RooModelId = "xai/grok-code-fast-1"

@@ -25,4 +29,24 @@ export const rooModels = {
description:
"A versatile agentic coding stealth model that supports image inputs, accessible for free through Roo Code Cloud for a limited time. (Note: the free prompts and completions are logged by the model provider and used to improve the model.)",
},
"xai/grok-4-fast": {
P2: New model IDs were added but RooModelId (see line 3) still only covers two IDs. This can cause downstream typing friction. Suggest either:

  • Redefine RooModelId to derive from the model map (keyof typeof rooModels), placing the type after the constant; or
  • Extend the union to include "xai/grok-4-fast" and "deepseek/deepseek-chat-v3.1".
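The first option above can be sketched as follows. This is a minimal standalone sketch, not the repository's code: the `contextWindow` values for the first two models are illustrative placeholders, and the full `ModelInfo` fields are omitted.

```typescript
// Hypothetical sketch: define the model map first, then derive the ID
// union from its keys, so the type and the map can never drift apart.
const rooModels = {
	"xai/grok-code-fast-1": { contextWindow: 262_144 }, // placeholder value
	"roo/code-supernova": { contextWindow: 200_000 }, // placeholder value
	"xai/grok-4-fast": { contextWindow: 2_000_000 },
	"deepseek/deepseek-chat-v3.1": { contextWindow: 163_840 },
} as const

// keyof typeof rooModels is the union of the four literal keys above;
// adding an entry to the map extends the type automatically.
type RooModelId = keyof typeof rooModels

const id: RooModelId = "xai/grok-4-fast" // compiles; a typo would not
```

With this arrangement the union never needs a manual edit when a model is added, which is exactly the "downstream typing friction" the comment is trying to avoid.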

maxTokens: 30_000,
contextWindow: 2_000_000,
supportsImages: false,
P2: Description below calls Grok 4 Fast “multimodal”, but supportsImages is false. If images aren’t supported through the Roo provider for this free tier, consider removing “multimodal” from the description to prevent confusion.

supportsPromptCache: false,
inputPrice: 0,
outputPrice: 0,
description:
"Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. (Note: prompts and completions are logged by xAI and used to improve the model.)",

P2: The description calls Grok 4 Fast “multimodal” but images are disabled. If that’s intentional (no image support via Roo), consider adjusting the wording to avoid “multimodal”.


Suggested change
-	"Grok 4 Fast is xAI's latest multimodal model with SOTA cost-efficiency and a 2M token context window. (Note: prompts and completions are logged by xAI and used to improve the model.)",
+	"Grok 4 Fast is xAI's latest model with SOTA cost-efficiency and a 2M token context window. (Note: prompts and completions are logged by xAI and used to improve the model.)",

},
"deepseek/deepseek-chat-v3.1": {
maxTokens: 16_384,
contextWindow: 163_840,
supportsImages: false,
supportsPromptCache: false,
inputPrice: 0,
outputPrice: 0,
description:
"DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active). It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference.",

P3: Description says “up to 128K tokens” but contextWindow is 163,840. Consider aligning the description with the configured window (e.g., ~160K).
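The "~160K" figure suggested above follows directly from the configured window expressed in binary-K units:

```typescript
// 163,840 tokens is exactly 160 Ki-tokens (160 * 1024), hence "~160K".
const contextWindow = 163_840
console.log(contextWindow / 1024) // → 160
```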


Suggested change
-	"DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active). It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 128K tokens, and uses FP8 microscaling for efficient inference.",
+	"DeepSeek-V3.1 is a large hybrid reasoning model (671B parameters, 37B active). It extends the DeepSeek-V3 base with a two-phase long-context training process, reaching up to 160K tokens, and uses FP8 microscaling for efficient inference.",

},
} as const satisfies Record<string, ModelInfo>
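The `as const satisfies Record<string, ModelInfo>` on the closing line is what makes the keyof-derivation option viable. A minimal sketch of why the two modifiers are paired (the `ModelInfo` shape here is a simplified stand-in for the real one imported from `../model.js`):

```typescript
// Simplified stand-in for the real ModelInfo from ../model.js.
interface ModelInfo {
	maxTokens: number
	contextWindow: number
	supportsImages: boolean
}

const models = {
	"xai/grok-4-fast": {
		maxTokens: 30_000,
		contextWindow: 2_000_000,
		supportsImages: false,
	},
	// A plain annotation (const models: Record<string, ModelInfo>) would
	// also type-check, but it widens the keys to `string`. `satisfies`
	// validates each entry against ModelInfo while `as const` keeps the
	// literal key available for keyof-style derivation.
} as const satisfies Record<string, ModelInfo>

type Id = keyof typeof models // the literal type "xai/grok-4-fast"
const id: Id = "xai/grok-4-fast"
```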