fix: override context window for MiniMax/Kimi free models (#5568)
Merged
kevinvandijk merged 3 commits into Kilo-Org:main on Feb 21, 2026
Conversation
Enforces a 200k context window for `minimax-2.1:free` and `kimi-2.5:free` to prevent aggressive truncation caused by incorrect API-reported limits.
🦋 Changeset detected — Latest commit: 97f8325. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
jeremylongshore added a commit to jeremylongshore/kilocode that referenced this pull request on Feb 15, 2026
Mirror: fix: override context window for MiniMax/Kimi free models (Kilo-Org#5568)
Override context window for MiniMax/Kimi free models.
```ts
}

// kilocode_change start
if (id.includes("minimax-2.1:free") || id.includes("kimi-2.5:free")) {
```
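For context, the override amounts to a small conditional in the model-parsing path: when the reported model id matches one of the affected free tiers, replace the API-reported context window with 200k. A minimal, self-contained sketch follows; `applyFreeModelOverride` and the `ParsedModel` shape are illustrative assumptions, not the repo's actual types.

```typescript
// Hypothetical, simplified sketch of the context-window override applied during
// model parsing. The real parseOpenRouterModel handles many more fields.
interface ParsedModel {
	contextWindow: number
	maxTokens: number
}

function applyFreeModelOverride(id: string, parsed: ParsedModel): ParsedModel {
	// The upstream API reports ~32k for these free tiers, but the underlying
	// models support 200k, so the reported value is overridden here.
	if (id.includes("minimax-2.1:free") || id.includes("kimi-2.5:free")) {
		return { ...parsed, contextWindow: 200_000 }
	}
	return parsed
}

console.log(applyFreeModelOverride("minimax/minimax-2.1:free", { contextWindow: 32768, maxTokens: 8192 }))
```

Models that do not match the id check pass through unchanged, so the override cannot affect any other provider entry.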
Contributor
WARNING: Missing test coverage for this new override.
Every other model-specific override in parseOpenRouterModel has a corresponding test in openrouter.spec.ts (e.g., horizon-alpha, horizon-beta, claude-sonnet-4.6). This new context window override for MiniMax/Kimi free models should also have test coverage.
Consider adding tests like:
```ts
it("overrides context window for minimax-2.1:free models", () => {
	const mockModel = {
		name: "MiniMax 2.1 Free",
		description: "Test model",
		context_length: 1000000,
		max_completion_tokens: 8192,
		pricing: { prompt: "0", completion: "0" },
	}
	const result = parseOpenRouterModel({
		id: "minimax/minimax-2.1:free",
		model: mockModel,
		inputModality: ["text"],
		outputModality: ["text"],
		maxTokens: 8192,
	})
	expect(result.contextWindow).toBe(200000)
})
```

And a similar test for `kimi-2.5:free`.
Contributor
Code Review Summary — Status: 1 Issue Found | Recommendation: Address before merge
Issue Details: WARNING (missing test coverage, detailed above)
Files Reviewed (2 files)
|
kevinvandijk approved these changes on Feb 21, 2026
Context
fixes #5566
Implementation
I resolved the issue by patching the model-information parsing logic to manually override the reported context window for the affected free-tier models.
Screenshots
How to Test
Launch the Extension:
Open the project in VS Code.
Press F5 to start a debug session (or build and install the .vsix).
Select the Model:
Open the extension sidebar.
Go to Settings (gear icon) -> Provider.
Select KiloCode (or OpenRouter).
In the Model dropdown, select either MiniMax 2.1:free or Kimi 2.5:free.
Verify Context Limits:
Start a chat session.
Look at the Context Window Indicator (the progress bar at the bottom of the chat interface).
Before Fix: the total capacity (the right-side number) showed approximately 32k (32,768).
After Fix: the total capacity should display ~200k (200,000).
Functional Test:
Paste a large amount of text (e.g., >40k tokens) into the chat.
Verify that the "Context Truncated" warning does not appear and the extension sends the full context to the model.