Integration test: PRs #5370 + #5660 + #5704 merged together (#16)
When using anthropic-provider mode, the read_file tool's line_ranges parameter
was being converted from snake_case to camelCase and from tuple format to object
format before saving to conversation history. This caused inconsistency between
the API response format and the saved history format.
Changes:
- Add rawInput field to ToolUse interface to preserve original API parameters
- Save original args to rawInput in NativeToolCallParser.parseToolCall()
- Use rawInput (if available) when saving tool calls to history in Task.ts
- Add test case to verify rawInput preserves original line_ranges format
This ensures line_ranges stays as [[1, 50]] instead of being converted to
lineRanges: [{ start: 1, end: 50 }] in the conversation history.
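The preservation pattern described above can be sketched as follows. This is a simplified illustration with hypothetical shapes, not the repo's actual `ToolUse` interface or parser:

```typescript
// Simplified sketch: keep the provider's verbatim arguments alongside the
// normalized ones, and prefer rawInput when writing conversation history.
interface ToolUse {
	name: string
	input: Record<string, unknown> // normalized (camelCase, object ranges)
	rawInput?: Record<string, unknown> // verbatim API arguments
}

function parseToolCall(name: string, args: Record<string, unknown>): ToolUse {
	// Save the original args before any conversion happens.
	return { name, input: normalizeArgs(args), rawInput: args }
}

function argsForHistory(toolUse: ToolUse): Record<string, unknown> {
	// Fall back to the normalized input only when rawInput is absent.
	return toolUse.rawInput ?? toolUse.input
}

// Stand-in for the real conversion (snake_case -> camelCase, tuples -> objects).
function normalizeArgs(args: Record<string, unknown>): Record<string, unknown> {
	if (Array.isArray(args.line_ranges)) {
		return {
			lineRanges: (args.line_ranges as [number, number][]).map(([start, end]) => ({ start, end })),
		}
	}
	return { ...args }
}
```

With this fallback chain, history records `line_ranges: [[1, 50]]` exactly as the API sent it, while the normalized `lineRanges` form remains available for internal use.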
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Make model search case-insensitive so users can find models regardless of casing.
For example, searching for "kimi k2.5" will now find models like "Kimi-K2.5-Instruct".
Changes:
- Add custom filtering with toLowerCase() for case-insensitive search
- Disable default Command filtering with shouldFilter={false}
- Use filtered model lists for displaying results
Fixes Kilo-Org#5694
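The custom filtering can be sketched as a plain function over model IDs (the real component disables cmdk's built-in matching with `shouldFilter={false}` and applies its own; the helper name here is illustrative):

```typescript
// Case-insensitive substring match over model IDs.
function filterModels(modelIds: string[], query: string): string[] {
	const q = query.toLowerCase()
	return modelIds.filter((id) => id.toLowerCase().includes(q))
}
```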
Fixes Kimi model search in the OpenAI Compatible provider by:

1. Search normalization — model search now normalizes dashes, spaces, and underscores for fuzzy matching.
2. Kimi fallback models — Kimi models are now included as a fallback when using OpenAI Compatible with Kimi endpoints:
   - Detects Kimi endpoints (kimi, moonshot, api.moonshot.ai/cn)
   - Always includes all Kimi models even if the API fetch fails
   - Uses a Kimi default model for Kimi endpoints
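A minimal sketch of the normalization described above, with hypothetical helper names (the actual implementation lives in ModelPicker):

```typescript
// Collapse dashes, underscores, and whitespace so queries like
// "kimi k2.5" match IDs like "Kimi-K2.5-Instruct".
function normalizeForSearch(s: string): string {
	return s.toLowerCase().replace(/[-_\s]+/g, "")
}

function fuzzyMatch(modelId: string, query: string): boolean {
	return normalizeForSearch(modelId).includes(normalizeForSearch(query))
}
```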
Review Summary by Qodo: Integrate Mistral SDK, preserve API formats, and improve model search
Walkthrough

Description:

- Migrate Mistral FIM streaming to the official SDK with improved error handling
- Preserve original API parameter formats in conversation history for consistency
- Implement case-insensitive model search with normalization for better discoverability
- Add Kimi model fallback support for the OpenAI Compatible provider

Diagram:

```mermaid
flowchart LR
    A["Mistral FIM Handler"] -->|"Use SDK client"| B["Official Mistral SDK"]
    C["Tool Call Parser"] -->|"Preserve rawInput"| D["API History"]
    E["Model Search"] -->|"Normalize case/dashes"| F["Filtered Results"]
    G["OpenAI Compatible"] -->|"Detect Kimi endpoint"| H["Kimi Fallback Models"]
```
File Changes

1. `src/api/providers/mistral.ts`
Summary of Changes

Hello @jeremylongshore, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request serves as an integration test, combining three previously developed features to confirm their compatibility and prevent regressions. It validates the migration of Mistral's FIM streaming to its official SDK, enhances the ModelPicker's search functionality with case-insensitivity and Kimi-specific fallbacks, and refines tool call argument handling to maintain original data formats for API history. The combined tests confirm that these changes introduce no new interaction failures.
Warning: Rate limit exceeded. CodeRabbit enforces hourly per-developer rate limits; the review can be re-triggered after the wait time has elapsed.

📝 Walkthrough

Case-insensitive model search with dash/space normalization was added to ModelPicker; Kimi/Moonshot models are used as a fallback for OpenAI-compatible endpoints; the Mistral provider switched to SDK-based streaming with tool-call and FIM support; tool calls now preserve original API inputs via a new rawInput field.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    Task->>Provider: request FIM stream (model, prompt, suffix, tools...)
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
Pre-merge checks: ✅ 3 passed | ❌ 1 failed (warning)
Failed to generate code suggestions for PR
Code Review
This pull request combines three separate improvements: preserving original tool call arguments for better history consistency, refactoring the Mistral provider to use the official SDK, and enhancing the model picker with case-insensitive search and Kimi/Moonshot endpoint detection. The changes are well-implemented and improve code quality and user experience. I have one suggestion to make the endpoint detection logic more robust.
```typescript
const isKimiEndpoint =
	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("kimi") ||
	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("moonshot") ||
	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.ai") ||
	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.cn")
```
The current logic for detecting a Kimi/Moonshot endpoint relies on simple string inclusion, which could lead to false positives if a user's custom proxy URL happens to contain "kimi" or "moonshot". A more robust approach would be to parse the URL and check the hostname. This would make the detection more accurate and prevent incorrect model lists from being shown.
Wrapping this logic in useMemo would also prevent re-calculating it on every render.
```diff
-const isKimiEndpoint =
-	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("kimi") ||
-	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("moonshot") ||
-	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.ai") ||
-	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.cn")
+const isKimiEndpoint = useMemo(() => {
+	const baseUrl = apiConfiguration.openAiBaseUrl?.toLowerCase();
+	if (!baseUrl) return false;
+	try {
+		// Ensure there's a protocol for URL parsing, default to https
+		const url = new URL(baseUrl.startsWith("http") ? baseUrl : `https://${baseUrl}`);
+		const hostname = url.hostname;
+		// Check for official Kimi/Moonshot domains
+		return hostname.endsWith("moonshot.ai") || hostname.endsWith("moonshot.cn") || hostname.includes("kimi");
+	} catch (e) {
+		// Fallback to simple string matching for invalid URLs or during input
+		return baseUrl.includes("kimi") || baseUrl.includes("moonshot");
+	}
+}, [apiConfiguration.openAiBaseUrl]);
```
Tests now mock `this.client.fim.stream()` (Mistral SDK) instead of `global.fetch` + `streamSse`, matching the PR Kilo-Org#5660 refactor.

- yields chunks correctly: mocks SDK async iterable
- handles errors correctly: mocks SDK rejection + TelemetryService
- uses correct endpoint: verifies Mistral constructor args
- uses custom codestral URL: verifies constructor with custom URL

All 8 tests pass (4 fimSupport + 4 streamFim).
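The mocking pattern can be sketched independently of the test framework: a fake `fim.stream()` that resolves to an async iterable of chunk events. Event and field names below are illustrative, not the exact Mistral SDK types:

```typescript
// Illustrative SDK event shape.
interface FimEvent {
	data: { choices?: { delta: { content?: string } }[] }
}

// Stand-in for this.client.fim.stream(): resolves to an async iterable.
function makeFakeStream(chunks: string[]): Promise<AsyncIterable<FimEvent>> {
	async function* iterate(): AsyncGenerator<FimEvent> {
		for (const content of chunks) {
			yield { data: { choices: [{ delta: { content } }] } }
		}
	}
	return Promise.resolve(iterate())
}

// Consumer mirroring streamFim's loop over SDK events.
async function collect(stream: AsyncIterable<FimEvent>): Promise<string> {
	let out = ""
	for await (const ev of stream) {
		const content = ev.data.choices?.[0]?.delta.content
		if (typeof content === "string") out += content
	}
	return out
}
```

In the actual spec file, `makeFakeStream` would be the return value of a vitest mock on the SDK client, and assertions would run against the collected chunks.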
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@webview-ui/src/components/settings/providers/OpenAICompatible.tsx`:
- Around line 59-65: The spread order in the modelsToUse computed value causes
generic fetched model entries (openAiModels which include
openAiModelInfoSaneDefaults) to overwrite curated moonshotModels for known Kimi
IDs; update the useMemo logic in modelsToUse so that moonshotModels are spread
after fetchedModels (i.e., { ...fetchedModels, ...moonshotModels }) when
isKimiEndpoint is true, ensuring curated moonshotModels take precedence while
still including any additional models returned by openAiModels.
🧹 Nitpick comments (1)
webview-ui/src/components/settings/providers/OpenAICompatible.tsx (1)
53-57: Redundant URL substring checks. Lines 56–57 (`.includes("api.moonshot.ai")` / `.includes("api.moonshot.cn")`) are already covered by the `.includes("moonshot")` check on line 55; they can be removed without changing behavior.

♻️ Suggested simplification

```diff
 const isKimiEndpoint =
 	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("kimi") ||
-	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("moonshot") ||
-	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.ai") ||
-	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.cn")
+	apiConfiguration.openAiBaseUrl?.toLowerCase().includes("moonshot")
```
```typescript
const modelsToUse = useMemo(() => {
	const fetchedModels = openAiModels ?? {}
	if (isKimiEndpoint) {
		return { ...moonshotModels, ...fetchedModels }
	}
	return fetchedModels
}, [isKimiEndpoint, openAiModels])
```
Spread order causes curated Kimi model info to be overwritten by generic defaults.
When a Kimi endpoint returns model IDs that match entries in moonshotModels (e.g. "kimi-k2-thinking"), the fetched models—which all carry openAiModelInfoSaneDefaults (see line 132)—will overwrite the curated moonshot entries that contain accurate pricing, context windows, and capability flags.
Reverse the spread so curated data takes precedence over generic defaults:
🐛 Proposed fix

```diff
 const modelsToUse = useMemo(() => {
 	const fetchedModels = openAiModels ?? {}
 	if (isKimiEndpoint) {
-		return { ...moonshotModels, ...fetchedModels }
+		return { ...fetchedModels, ...moonshotModels }
 	}
 	return fetchedModels
 }, [isKimiEndpoint, openAiModels])
```

If the intent is to also surface any additional models returned by the API that aren't in moonshotModels, this ordering achieves that while still preserving the curated metadata for known models.
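For reviewers unfamiliar with the pitfall, a minimal sketch of spread ordering, using hypothetical model maps rather than the repo's real data:

```typescript
// Later spreads win: keys in the second object overwrite the first.
const curated = { "kimi-k2-thinking": { contextWindow: 262144 } } // hypothetical curated entry
const fetched = { "kimi-k2-thinking": { contextWindow: 8192 } } // generic sane default

const wrong = { ...curated, ...fetched } // fetched defaults clobber curated data
const right = { ...fetched, ...curated } // curated data takes precedence
```

Any model ID present only in `fetched` survives in both orderings; the difference matters only for IDs that exist in both maps.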
Test Fix: `mistral-fim.spec.ts`

Updated tests to mock `this.client.fim.stream()` (the Mistral SDK) instead of `global.fetch` + `streamSse`.

What changed
Results (combined branch: PRs Kilo-Org#5370 + Kilo-Org#5660 + Kilo-Org#5704 + test fix)
Commit:
`process.cwd() + "packages/agent-runtime/..."` doubled the package dir when vitest runs from the package root. Use `__dirname` instead. Pre-existing bug, not from any reviewed PR.
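The doubling can be reproduced in isolation. The paths below are hypothetical stand-ins for the real working directory and module directory:

```typescript
import * as path from "node:path"

// When vitest already runs from inside the package, prefixing cwd with the
// package path doubles the directory segment.
const cwd = "/repo/packages/agent-runtime" // assumed working directory
const broken = path.posix.join(cwd, "packages/agent-runtime/fixtures/sample.txt")
// -> "/repo/packages/agent-runtime/packages/agent-runtime/fixtures/sample.txt"

// Anchoring on the module's own directory is independent of cwd.
const dirname = "/repo/packages/agent-runtime/src" // stands in for __dirname
const fixed = path.posix.join(dirname, "../fixtures/sample.txt")
// -> "/repo/packages/agent-runtime/fixtures/sample.txt"
```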
Code Review by Qodo
1. Unsafe data.choices[0] access
```typescript
const data = ev.data
const content = data.choices[0]?.delta.content
if (typeof content === "string") {
```
1. Unsafe data.choices[0] access 📘 Rule violation ⛯ Reliability
streamFim now indexes data.choices[0] without guarding when choices is missing, which can throw at runtime and break streaming. This violates the requirement to explicitly handle null/empty edge cases.
Agent Prompt
## Issue description
`streamFim` can throw when `ev.data.choices` is missing because it uses `data.choices[0]` without optional chaining on `choices`.
## Issue Context
The previous implementation used safe optional chaining (`data.choices?.[0]?.delta?.content`). The new SDK event payload may omit fields in some chunks or error cases.
## Fix Focus Areas
- src/api/providers/mistral.ts[282-285]
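The guarded access the prompt asks for can be sketched as follows; the event shape is illustrative, not the exact SDK type:

```typescript
// Illustrative stream event shape where any level may be absent.
interface FimStreamEvent {
	data?: { choices?: { delta?: { content?: string } }[] }
}

// Optional chaining on every level tolerates chunks that omit choices/delta.
function extractContent(ev: FimStreamEvent): string | undefined {
	return ev.data?.choices?.[0]?.delta?.content
}
```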
```typescript
let response
try {
	response = await this.client.fim.stream(request)
} catch (error) {
	const errorMessage = error instanceof Error ? error.message : String(error)
	const apiError = new ApiProviderError(errorMessage, this.providerName, model, "streamFim")
	TelemetryService.instance.captureException(apiError)
	throw new Error(`Mistral FIM completion error: ${errorMessage}`)
}
```
2. Mistral fim tests stale 🐞 Bug ⛯ Reliability
`mistral-fim.spec.ts` still mocks and asserts the old fetch+SSE behavior, but `streamFim` now uses the Mistral SDK (`client.fim.stream`). This mismatch will keep CI failing until the tests are updated to mock the SDK layer.
Agent Prompt
### Issue description
Unit tests in `mistral-fim.spec.ts` are written for the previous implementation (manual `fetch` + `streamSse`). The provider now uses the Mistral SDK’s `client.fim.stream`, so these tests will fail.
### Issue Context
`mistral.spec.ts` already demonstrates the pattern used in this repo for mocking the Mistral SDK (`@mistralai/mistralai`) by returning an async-iterable stream.
### Fix Focus Areas
- src/api/providers/__tests__/mistral-fim.spec.ts[7-191]
- src/api/providers/__tests__/mistral.spec.ts[11-48]
- src/api/providers/mistral.ts[250-305]
Combined Integration Test
Merges 3 upstream PRs together to verify no interaction conflicts.
Combined Test Results
- `pnpm check-types`
- `pnpm lint`
- `pnpm test --continue`*

\*The 4 failures are all in `mistral-fim.spec.ts` — same failures as from PR Kilo-Org#5660 individually. No new interaction failures from merging all 3 PRs together.
Merge Conflicts Resolved
- `src/api/providers/mistral.ts` — PR "Use Mistral SDK in `MistralHandler.streamFim`" (Kilo-Org/kilocode#5660) removes the `DEFAULT_HEADERS` and `streamSse` imports (replaced by the SDK). Took the PR's side.
- `src/core/task/Task.ts` — PR "fix: preserve original line_ranges format in API history for anthropi…" (Kilo-Org/kilocode#5370) adds a `rawInput` fallback; upstream main added deduplication. Kept both: deduplication logic + `rawInput` fallback chain.

Conclusion
These 3 PRs are safe to merge independently or together. Zero interaction bugs.
Summary by CodeRabbit
New Features
Improvements
Tests