
Integration test: PRs #5370 + #5660 + #5704 merged together (#16)

Merged: jeremylongshore merged 11 commits into main from review/combined-batch-1 on Feb 15, 2026
Conversation

@jeremylongshore (Owner) commented Feb 14, 2026

Combined Integration Test

Merges 3 upstream PRs together to verify no interaction conflicts.

PR     Title                                        Individual Result
#5370  fix: preserve original line_ranges format    PASS (7,524 tests)
#5660  Use Mistral SDK in MistralHandler.streamFim  FAIL (4 test failures — mocks)
#5704  fix: Improve Kimi model search + fallback    PASS (7,831 tests)

Combined Test Results

Test        Command               Result  Details
TypeScript  pnpm check-types      PASS    22/22 packages
Lint        pnpm lint             PASS    18/18 packages
Unit Tests  pnpm test --continue  PASS*   7,932 passed, 4 failed

*4 failures are all in mistral-fim.spec.ts — same failures from PR Kilo-Org#5660 individually.
No new interaction failures from merging all 3 PRs together.

Merge Conflicts Resolved

  1. src/api/providers/mistral.ts — PR Kilo-Org/kilocode#5660 ("Use Mistral SDK in MistralHandler.streamFim") removes the DEFAULT_HEADERS and streamSse imports (replaced by the SDK). Took the PR's side.
  2. src/core/task/Task.ts — PR Kilo-Org/kilocode#5370 ("fix: preserve original line_ranges format in API history for anthropi…") adds a rawInput fallback; upstream main added deduplication. Kept both: deduplication logic + rawInput fallback chain.

Conclusion

These 3 PRs are safe to merge independently or together. Zero interaction bugs.

Summary by CodeRabbit

  • New Features

    • Case-insensitive model search with dash/space normalization for improved discoverability.
    • Kimi models added as a fallback option in the OpenAI-compatible provider.
  • Improvements

    • Enhanced streaming support for the Mistral provider and more robust streaming error handling.
    • Tool-call history now preserves original input shapes and alias names for more accurate history.
  • Tests

    • Updated/added tests covering streaming behavior and tool-call history preservation.

congfeng.yin and others added 9 commits January 24, 2026 21:49
…c-provider

When using anthropic-provider mode, the read_file tool's line_ranges parameter
was being converted from snake_case to camelCase and from tuple format to object
format before saving to conversation history. This caused inconsistency between
the API response format and the saved history format.

Changes:
- Add rawInput field to ToolUse interface to preserve original API parameters
- Save original args to rawInput in NativeToolCallParser.parseToolCall()
- Use rawInput (if available) when saving tool calls to history in Task.ts
- Add test case to verify rawInput preserves original line_ranges format

This ensures line_ranges stays as [[1, 50]] instead of being converted to
lineRanges: [{ start: 1, end: 50 }] in the conversation history.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
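The fallback chain described in this commit can be sketched as follows. This is a minimal illustration: the ToolUse interface is reduced to the three fields involved (the real one lives in src/shared/tools.ts), and pickHistoryInput is a hypothetical helper, not the actual Task.ts code.

```typescript
// Minimal sketch of the rawInput fallback chain; ToolUse is reduced to
// the three fields involved (hypothetical, for illustration only).
interface ToolUse {
	rawInput?: Record<string, unknown> // original API args, e.g. { line_ranges: [[1, 50]] }
	nativeArgs?: Record<string, unknown> // converted args, e.g. { lineRanges: [{ start: 1, end: 50 }] }
	params?: Record<string, unknown> // legacy params
}

// Hypothetical helper mirroring the fallback order used when saving history:
// prefer rawInput, then nativeArgs, then params.
function pickHistoryInput(toolUse: ToolUse): Record<string, unknown> | undefined {
	return toolUse.rawInput || toolUse.nativeArgs || toolUse.params
}

const toolUse: ToolUse = {
	rawInput: { line_ranges: [[1, 50]] },
	nativeArgs: { lineRanges: [{ start: 1, end: 50 }] },
}

// History keeps the original tuple format, not the converted object format.
console.log(JSON.stringify(pickHistoryInput(toolUse))) // {"line_ranges":[[1,50]]}
```

A tool call without rawInput (e.g. a legacy XML-parsed call) falls through to nativeArgs and then params unchanged.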
Make model search case-insensitive so users can find models regardless of casing.
For example, searching for "kimi k2.5" will now find models like "Kimi-K2.5-Instruct".

Changes:
- Add custom filtering with toLowerCase() for case-insensitive search
- Disable default Command filtering with shouldFilter={false}
- Use filtered model lists for displaying results

Fixes Kilo-Org#5694
Fixes Kimi model search in OpenAI Compatible provider by:

1. Search normalization - Model search now normalizes dashes, spaces,
   and underscores for fuzzy matching:
   -  matches
   -  matches

2. Kimi fallback models - Kimi models are now included as fallback when
   using OpenAI Compatible with Kimi endpoints:
   - Detects Kimi endpoints (kimi, moonshot, api.moonshot.ai/cn)
   - Always includes all Kimi models even if API fetch fails
   - Uses  as default model for Kimi endpoints
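The normalization described in these two commits can be sketched as below. The normalizeForSearch body matches the ModelPicker change (lowercase, strip dashes/underscores/whitespace); filterModels is a hypothetical wrapper added here for illustration, not the component's actual filter code.

```typescript
// Case-insensitive search with dash/space/underscore normalization,
// as added to ModelPicker.tsx: lowercase, then strip -, _, and whitespace.
const normalizeForSearch = (str: string) => str.toLowerCase().replace(/[-_\s]/g, "")

// Hypothetical wrapper: filter a model-id list against a user query.
function filterModels(modelIds: string[], searchValue: string): string[] {
	if (!searchValue.trim()) return modelIds
	const searchNormalized = normalizeForSearch(searchValue)
	return modelIds.filter((id) => normalizeForSearch(id).includes(searchNormalized))
}

const ids = ["Kimi-K2.5-Instruct", "mistral-large-latest"]
// "kimi k2.5" normalizes to "kimik2.5", a substring of "kimik2.5instruct".
console.log(filterModels(ids, "kimi k2.5")) // ["Kimi-K2.5-Instruct"]
```

Because both sides are normalized, "kimi_k2.5", "KIMI K2.5", and "kimi-k2.5" all match the same model.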
@qodo-code-review commented

Review Summary by Qodo

Integrate Mistral SDK, preserve API formats, and improve model search

✨ Enhancement 🐞 Bug fix


Walkthroughs

Description
• Migrate Mistral FIM streaming to official SDK with improved error handling
• Preserve original API parameter formats in conversation history for consistency
• Implement case-insensitive model search with normalization for better discoverability
• Add Kimi model fallback support for OpenAI Compatible provider
Diagram
flowchart LR
  A["Mistral FIM Handler"] -->|"Use SDK client"| B["Official Mistral SDK"]
  C["Tool Call Parser"] -->|"Preserve rawInput"| D["API History"]
  E["Model Search"] -->|"Normalize case/dashes"| F["Filtered Results"]
  G["OpenAI Compatible"] -->|"Detect Kimi endpoint"| H["Kimi Fallback Models"]


File Changes

1. src/api/providers/mistral.ts ✨ Enhancement +30/-39

Migrate FIM streaming to Mistral SDK

src/api/providers/mistral.ts


2. src/core/assistant-message/NativeToolCallParser.ts 🐞 Bug fix +3/-0

Add rawInput field to preserve original formats

src/core/assistant-message/NativeToolCallParser.ts


3. src/core/assistant-message/__tests__/NativeToolCallParser.spec.ts 🧪 Tests +43/-0

Test rawInput preservation for line_ranges format

src/core/assistant-message/__tests__/NativeToolCallParser.spec.ts


4. src/core/task/Task.ts 🐞 Bug fix +5/-2

Use rawInput for API history consistency

src/core/task/Task.ts


5. src/shared/tools.ts 📝 Documentation +6/-0

Document rawInput field in ToolUse interface

src/shared/tools.ts


6. .changeset/fix-case-insensitive-model-search.md 📝 Documentation +10/-0

Document case-insensitive model search fix

.changeset/fix-case-insensitive-model-search.md


7. .changeset/fix-kimi-model-search.md 📝 Documentation +5/-0

Document Kimi model search improvements

.changeset/fix-kimi-model-search.md


8. webview-ui/src/components/settings/ModelPicker.tsx ✨ Enhancement +27/-7

Implement case-insensitive search with normalization

webview-ui/src/components/settings/ModelPicker.tsx


9. webview-ui/src/components/settings/providers/OpenAICompatible.tsx ✨ Enhancement +22/-3

Add Kimi endpoint detection and model fallback

webview-ui/src/components/settings/providers/OpenAICompatible.tsx



@gemini-code-assist commented

Summary of Changes

Hello @jeremylongshore, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request serves as an integration test, combining three previously developed features to confirm their compatibility and prevent regressions. It validates the migration of Mistral's FIM streaming to its official SDK, enhances the ModelPicker's search functionality with case-insensitivity and Kimi-specific fallbacks, and refines tool call argument handling to maintain original data formats for API history. The combined tests confirm that these changes introduce no new interaction failures.

Highlights

Changelog
  • .changeset/fix-case-insensitive-model-search.md
    • Added a new changeset file documenting the fix for case-insensitive model search in the ModelPicker.
  • .changeset/fix-kimi-model-search.md
    • Added a new changeset file documenting the fix for Kimi model search and the addition of Kimi models as fallback for OpenAI Compatible providers.
  • src/api/providers/mistral.ts
    • Removed DEFAULT_HEADERS and streamSse imports.
    • Refactored streamFim method to utilize the Mistral SDK for FIM completions.
    • Updated error handling and usage reporting to align with the SDK's response format.
  • src/core/assistant-message/NativeToolCallParser.ts
    • Introduced rawInput property to ToolCall objects, storing the original arguments to preserve their format (e.g., line_ranges as [[1, 50]]).
  • src/core/assistant-message/__tests__/NativeToolCallParser.spec.ts
    • Added a new test case to verify that rawInput correctly preserves the original line_ranges format for API history.
  • src/core/task/Task.ts
    • Modified the logic for retrieving tool call input, prioritizing toolUse.rawInput to ensure format consistency in API history, with fallbacks to nativeArgs and params.
  • src/shared/tools.ts
    • Added an optional rawInput property of type Record<string, unknown> to the ToolUse interface, explicitly for preserving original API input formats.
  • webview-ui/src/components/settings/ModelPicker.tsx
    • Implemented a normalizeForSearch utility function for case-insensitive and whitespace-agnostic model ID searching.
    • Updated preferredModelIds and restModelIds filtering to use the new normalizeForSearch logic.
    • Disabled default filtering for the Command component and adjusted CommandGroup headings based on search activity.
  • webview-ui/src/components/settings/providers/OpenAICompatible.tsx
    • Imported moonshotModels and moonshotDefaultModelId.
    • Added logic to detect Kimi/Moonshot endpoints based on the openAiBaseUrl.
    • Dynamically adjusted the modelsToUse and defaultModelId for the ModelPicker to include Moonshot models when a Kimi endpoint is detected.


coderabbitai Bot commented Feb 14, 2026

📝 Walkthrough

Case-insensitive model search with dash/space normalization was added to ModelPicker; Kimi/Moonshot models are used as a fallback for OpenAI-compatible endpoints; Mistral provider switched to SDK-based streaming with tool-call and FIM support; tool calls now preserve original API inputs via a new rawInput field.

Changes

  • Changeset Documentation (.changeset/fix-case-insensitive-model-search.md, .changeset/fix-kimi-model-search.md): Patch change entries added documenting case-insensitive model search and Kimi model fallback for the OpenAI-compatible provider.
  • Model Search UI (webview-ui/src/components/settings/ModelPicker.tsx): Implemented case-insensitive search with dash/space normalization; filter logic refactored to drive preferred/all groups and disable internal Command filtering.
  • OpenAI Compatible Provider UI (webview-ui/src/components/settings/providers/OpenAICompatible.tsx): Endpoint detection for Kimi/Moonshot added; memoized model lists and default model selection added; ModelPicker re-mounts when endpoint type changes.
  • Mistral Provider, streaming & FIM (src/api/providers/mistral.ts, src/api/providers/__tests__/mistral-fim.spec.ts): Replaced manual fetch/SSE streaming with SDK-based streaming (fim.stream/chat.stream); updated payload shape, streaming iteration, content extraction (including thinking chunks and tool_calls), usage field mapping, and error wrapping; tests updated to mock SDK streams.
  • Tool Use / History Preservation (src/shared/tools.ts, src/core/assistant-message/NativeToolCallParser.ts, src/core/task/Task.ts): Added optional rawInput to ToolUse to preserve original API argument shapes; NativeToolCallParser populates rawInput; Task assembly prefers rawInput for history and uses originalName for alias-preserving history entries.
  • Tool Call Tests (src/core/assistant-message/__tests__/NativeToolCallParser.spec.ts): Added tests asserting rawInput preserves the original line_ranges tuple format while nativeArgs uses converted objects.

Sequence Diagram(s)

mermaid
sequenceDiagram
    participant Task as Task / Caller
    participant Provider as MistralProvider
    participant SDK as Mistral SDK
    participant Consumer as Stream Consumer
    participant Telemetry as TelemetryService

    Task->>Provider: request FIM stream (model, prompt, suffix, tools...)
    Provider->>SDK: fim.stream({ model, prompt, suffix, stream: true, tools })
    SDK-->>Provider: async chunks/events (text, thinking[], tool_calls, usage)
    Provider->>Provider: normalize content (strings, thinking chunks), handle tool_calls, map usage fields
    Provider->>Consumer: emit text/chunk events (including reasoning/tool call events)
    SDK-->>Provider: stream end / usage summary
    Provider->>Task: final result + mapped usage (promptTokens, completionTokens, totalTokens)
    alt error
        SDK-->>Provider: error
        Provider->>Telemetry: captureException(error)
        Provider->>Task: throw ApiProviderError("Mistral FIM completion error: ...")
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


🚥 Pre-merge checks: 3 passed, 1 failed

❌ Failed checks (1 warning)

Merge Conflict Detection ⚠️ Warning: merge conflicts detected (10 files):

⚔️ .devcontainer/devcontainer.json (content)
⚔️ .github/dependabot.yml (content)
⚔️ src/api/providers/__tests__/mistral-fim.spec.ts (content)
⚔️ src/api/providers/mistral.ts (content)
⚔️ src/core/assistant-message/NativeToolCallParser.ts (content)
⚔️ src/core/assistant-message/__tests__/NativeToolCallParser.spec.ts (content)
⚔️ src/core/task/Task.ts (content)
⚔️ src/shared/tools.ts (content)
⚔️ webview-ui/src/components/settings/ModelPicker.tsx (content)
⚔️ webview-ui/src/components/settings/providers/OpenAICompatible.tsx (content)

These conflicts must be resolved before merging into main.
Resolve conflicts locally and push changes to this branch.
✅ Passed checks (3 passed)

  • Title check ✅ Passed: The title "Integration test: PRs #5370 + #5660 + #5704 merged together" clearly summarizes the main purpose of the PR, combining three upstream PRs to test for conflicts.
  • Description check ✅ Passed: The description is well structured and covers the key aspects: context (why the three PRs are tested together), implementation details (merge conflicts resolved), test results, and conclusions. While it does not follow the exact template sections, it communicates all essential information.
  • Docstring Coverage ✅ Passed: No functions found in the changed files to evaluate; docstring coverage check skipped.



@github-actions commented

Failed to generate code suggestions for PR


@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request combines three separate improvements: preserving original tool call arguments for better history consistency, refactoring the Mistral provider to use the official SDK, and enhancing the model picker with case-insensitive search and Kimi/Moonshot endpoint detection. The changes are well-implemented and improve code quality and user experience. I have one suggestion to make the endpoint detection logic more robust.

Comment on lines +53 to +57
const isKimiEndpoint =
apiConfiguration.openAiBaseUrl?.toLowerCase().includes("kimi") ||
apiConfiguration.openAiBaseUrl?.toLowerCase().includes("moonshot") ||
apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.ai") ||
apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.cn")

medium

The current logic for detecting a Kimi/Moonshot endpoint relies on simple string inclusion, which could lead to false positives if a user's custom proxy URL happens to contain "kimi" or "moonshot". A more robust approach would be to parse the URL and check the hostname. This would make the detection more accurate and prevent incorrect model lists from being shown.

Wrapping this logic in useMemo would also prevent re-calculating it on every render.

Suggested change

Replace:

	const isKimiEndpoint =
		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("kimi") ||
		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("moonshot") ||
		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.ai") ||
		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.cn")

with:

	const isKimiEndpoint = useMemo(() => {
		const baseUrl = apiConfiguration.openAiBaseUrl?.toLowerCase();
		if (!baseUrl) return false;
		try {
			// Ensure there's a protocol for URL parsing, default to https
			const url = new URL(baseUrl.startsWith("http") ? baseUrl : `https://${baseUrl}`);
			const hostname = url.hostname;
			// Check for official Kimi/Moonshot domains
			return hostname.endsWith("moonshot.ai") || hostname.endsWith("moonshot.cn") || hostname.includes("kimi");
		} catch (e) {
			// Fallback to simple string matching for invalid URLs or during input
			return baseUrl.includes("kimi") || baseUrl.includes("moonshot");
		}
	}, [apiConfiguration.openAiBaseUrl]);

Tests now mock this.client.fim.stream() (Mistral SDK) instead of
global.fetch + streamSse, matching the PR Kilo-Org#5660 refactor.

- yields chunks correctly: mocks SDK async iterable
- handles errors correctly: mocks SDK rejection + TelemetryService
- uses correct endpoint: verifies Mistral constructor args
- uses custom codestral URL: verifies constructor with custom URL

All 8 tests pass (4 fimSupport + 4 streamFim).

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Fix all issues with AI agents
In `@webview-ui/src/components/settings/providers/OpenAICompatible.tsx`:
- Around line 59-65: The spread order in the modelsToUse computed value causes
generic fetched model entries (openAiModels which include
openAiModelInfoSaneDefaults) to overwrite curated moonshotModels for known Kimi
IDs; update the useMemo logic in modelsToUse so that moonshotModels are spread
after fetchedModels (i.e., { ...fetchedModels, ...moonshotModels }) when
isKimiEndpoint is true, ensuring curated moonshotModels take precedence while
still including any additional models returned by openAiModels.
🧹 Nitpick comments (1)
webview-ui/src/components/settings/providers/OpenAICompatible.tsx (1)

53-57: Redundant URL substring checks.

Lines 56–57 (.includes("api.moonshot.ai") / .includes("api.moonshot.cn")) are already covered by the .includes("moonshot") check on line 55. They can be removed without changing behavior.

♻️ Suggested simplification
 	const isKimiEndpoint =
 		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("kimi") ||
-		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("moonshot") ||
-		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.ai") ||
-		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("api.moonshot.cn")
+		apiConfiguration.openAiBaseUrl?.toLowerCase().includes("moonshot")

Comment on lines +59 to +65
	const modelsToUse = useMemo(() => {
		const fetchedModels = openAiModels ?? {}
		if (isKimiEndpoint) {
			return { ...moonshotModels, ...fetchedModels }
		}
		return fetchedModels
	}, [isKimiEndpoint, openAiModels])

⚠️ Potential issue | 🟠 Major

Spread order causes curated Kimi model info to be overwritten by generic defaults.

When a Kimi endpoint returns model IDs that match entries in moonshotModels (e.g. "kimi-k2-thinking"), the fetched models—which all carry openAiModelInfoSaneDefaults (see line 132)—will overwrite the curated moonshot entries that contain accurate pricing, context windows, and capability flags.

Reverse the spread so curated data takes precedence over generic defaults:

🐛 Proposed fix
 	const modelsToUse = useMemo(() => {
 		const fetchedModels = openAiModels ?? {}
 		if (isKimiEndpoint) {
-			return { ...moonshotModels, ...fetchedModels }
+			return { ...fetchedModels, ...moonshotModels }
 		}
 		return fetchedModels
 	}, [isKimiEndpoint, openAiModels])

If the intent is to also surface any additional models returned by the API that aren't in moonshotModels, this ordering achieves that while still preserving the curated metadata for known models.


@jeremylongshore (Owner, Author) commented

Test Fix: mistral-fim.spec.ts

Updated tests to mock this.client.fim.stream() (Mistral SDK) instead of global.fetch + streamSse.

What changed

  • Mock @mistralai/mistralai module → control client.fim.stream() return value
  • Mock @roo-code/telemetry → prevent TelemetryService not initialized errors
  • Removed streamSse import (no longer used by the code)
  • Endpoint tests now verify Mistral constructor args instead of fetch() call args

Results (combined branch: PRs Kilo-Org#5370 + Kilo-Org#5660 + Kilo-Org#5704 + test fix)

Test                 Result  Details
check-types          PASS    22/22 packages
lint                 PASS    18/18 packages
unit tests           PASS    7,936 passed, 84 skipped, 0 failures
mistral-fim.spec.ts  PASS    8/8 tests (4 fimSupport + 4 streamFim)

Commit: 4ee3d6788e
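The mocking approach described above, replacing fetch/SSE mocks with an SDK stream that is just an async iterable, can be sketched without a test framework as follows. The FimChunk shape and fakeFimStream/collectText helpers are illustrative, modeled on the delta/usage fields discussed in this PR, not the exact Mistral SDK types; the real tests mock the @mistralai/mistralai module itself.

```typescript
// Sketch of the test-fix idea: instead of stubbing global.fetch + streamSse,
// the SDK's fim.stream() is stood in for by any async iterable of chunk events.
// Chunk shape is illustrative, not the actual Mistral SDK types.
type FimChunk = {
	data: {
		choices?: Array<{ delta: { content?: string } }>
		usage?: { promptTokens: number; completionTokens: number }
	}
}

// A fake stream standing in for client.fim.stream().
async function* fakeFimStream(): AsyncGenerator<FimChunk> {
	yield { data: { choices: [{ delta: { content: "Hello" } }] } }
	yield { data: { choices: [{ delta: { content: " world" } }] } }
	yield { data: { usage: { promptTokens: 10, completionTokens: 2 } } }
}

// Consumer mirroring the guarded access pattern: choices may be absent
// on usage-only chunks, so optional chaining is required.
async function collectText(stream: AsyncIterable<FimChunk>): Promise<string> {
	let text = ""
	for await (const ev of stream) {
		const content = ev.data.choices?.[0]?.delta.content
		if (typeof content === "string") text += content
	}
	return text
}

collectText(fakeFimStream()).then((text) => console.log(text)) // "Hello world"
```

Because the consumer only needs an async iterable, a test can yield any chunk sequence (including usage-only or empty-choices chunks) to exercise edge cases without any network layer.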

process.cwd() + "packages/agent-runtime/..." doubled the package dir
when vitest runs from the package root. Use __dirname instead.

Pre-existing bug, not from any reviewed PR.
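The path bug described above can be illustrated with the snippet below. The concrete paths are hypothetical stand-ins for the repository layout (the real fixture paths are not shown in this PR); path.posix is used so the behavior is the same on every platform.

```typescript
import * as path from "node:path"

// When vitest runs with cwd already inside the package, joining cwd with a
// package-relative path doubles the package directory:
const cwd = "/repo/packages/agent-runtime" // illustrative cwd inside the package
const broken = path.posix.join(cwd, "packages/agent-runtime/fixtures/a.json")
console.log(broken) // /repo/packages/agent-runtime/packages/agent-runtime/fixtures/a.json

// __dirname is anchored to the source file, so the resolved path is stable
// regardless of where vitest was launched from:
const dirnameExample = "/repo/packages/agent-runtime/src" // stands in for __dirname
const fixed = path.posix.join(dirnameExample, "../fixtures/a.json")
console.log(fixed) // /repo/packages/agent-runtime/fixtures/a.json
```

This is why the fix swaps process.cwd() for __dirname: the latter does not depend on the test runner's working directory.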
@qodo-code-review commented

Code Review by Qodo

🐞 Bugs (2) 📘 Rule violations (3) 📎 Requirement gaps (0)



Action required

1. Unsafe data.choices[0] access 📘 Rule violation ⛯ Reliability
Description
streamFim now indexes data.choices[0] without guarding when choices is missing, which can
throw at runtime and break streaming. This violates the requirement to explicitly handle null/empty
edge cases.
Code

src/api/providers/mistral.ts[R282-285]

+			const data = ev.data
+
+			const content = data.choices[0]?.delta.content
+			if (typeof content === "string") {
Evidence
PR Compliance ID 3 requires explicit handling of null/empty edge cases; the new code directly
indexes data.choices[0], which will throw if choices is undefined or null.

Rule 3: Generic: Robust Error Handling and Edge Case Management
src/api/providers/mistral.ts[282-285]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
`streamFim` can throw when `ev.data.choices` is missing because it uses `data.choices[0]` without optional chaining on `choices`.

## Issue Context
The previous implementation used safe optional chaining (`data.choices?.[0]?.delta?.content`). The new SDK event payload may omit fields in some chunks or error cases.

## Fix Focus Areas
- src/api/providers/mistral.ts[282-285]



2. Mistral FIM tests stale 🐞 Bug ⛯ Reliability
Description
mistral-fim.spec.ts still mocks and asserts the old fetch+SSE behavior, but streamFim now uses the
Mistral SDK (client.fim.stream). This mismatch will keep CI failing until tests are updated to
mock the SDK layer.
Code

src/api/providers/mistral.ts[R271-279]

+		let response
+		try {
+			response = await this.client.fim.stream(request)
+		} catch (error) {
+			const errorMessage = error instanceof Error ? error.message : String(error)
+			const apiError = new ApiProviderError(errorMessage, this.providerName, model, "streamFim")
+			TelemetryService.instance.captureException(apiError)
+			throw new Error(`Mistral FIM completion error: ${errorMessage}`)
		}
Evidence
The implementation now calls this.client.fim.stream(request) and does not call fetch or
streamSse. However, the test suite sets up global.fetch and mocks streamSse, and asserts
fetch was called with a codestral URL/headers and that streamSse consumed the Response
object—assertions that can no longer be true after the SDK migration.

src/api/providers/mistral.ts[250-279]
src/api/providers/__tests__/mistral-fim.spec.ts[7-122]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
Unit tests in `mistral-fim.spec.ts` are written for the previous implementation (manual `fetch` + `streamSse`). The provider now uses the Mistral SDK’s `client.fim.stream`, so these tests will fail.

### Issue Context
`mistral.spec.ts` already demonstrates the pattern used in this repo for mocking the Mistral SDK (`@mistralai/mistralai`) by returning an async-iterable stream.

### Fix Focus Areas
- src/api/providers/__tests__/mistral-fim.spec.ts[7-191]
- src/api/providers/__tests__/mistral.spec.ts[11-48]
- src/api/providers/mistral.ts[250-305]




Remediation recommended

3. Thrown error leaks details 📘 Rule violation ⛨ Security
Description
The thrown error includes the raw SDK error message, which may surface internal details to the user
if propagated to UI. Secure error handling requires generic user-facing errors with details only in
internal logs/telemetry.
Code

src/api/providers/mistral.ts[R274-278]

+		} catch (error) {
+			const errorMessage = error instanceof Error ? error.message : String(error)
+			const apiError = new ApiProviderError(errorMessage, this.providerName, model, "streamFim")
+			TelemetryService.instance.captureException(apiError)
+			throw new Error(`Mistral FIM completion error: ${errorMessage}`)
Evidence
PR Compliance ID 4 prohibits exposing internal system details in user-facing errors; the code throws
a new Error containing the raw errorMessage string.

Rule 4: Generic: Secure Error Handling
src/api/providers/mistral.ts[274-278]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A thrown error includes raw SDK error text, which can leak internal details if displayed to end users.

## Issue Context
Detailed errors are already captured via `TelemetryService.instance.captureException(apiError)`.

## Fix Focus Areas
- src/api/providers/mistral.ts[274-278]



4. kilocode_change marker missing 📘 Rule violation ✓ Correctness
Description
Conversation history input selection now prefers rawInput, but the changed region in src/ is not
annotated with kilocode_change. This violates the annotation requirement for upstream-shared code
modifications.
Code

src/core/task/Task.ts[R3784-3788]

+								// Use rawInput to preserve original API format for history consistency.
+								// This ensures parameters like line_ranges stay as [[1, 50]] instead of
+								// being converted to lineRanges with object format [{ start: 1, end: 50 }].
+								// Fall back to nativeArgs for tools that don't have rawInput, then to params for legacy.
+								const input = toolUse.rawInput || toolUse.nativeArgs || toolUse.params
Evidence
PR Compliance ID 7 requires kilocode_change markers for upstream-shared core edits; the new
fallback chain uses rawInput without the required marker.

AGENTS.md
src/core/task/Task.ts[3784-3788]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

## Issue description
A behavior change in `Task` history input selection is unmarked with `kilocode_change`.

## Issue Context
The change prioritizes `toolUse.rawInput` for API history consistency.

## Fix Focus Areas
- src/core/task/Task.ts[3784-3788]



5. Custom model option misfires 🐞 Bug ✓ Correctness
Description
ModelPicker’s filtering is now case-insensitive and normalizes dashes/spaces, but the “Use custom
model” option still uses an exact modelIds.includes(searchValue) check, so it can appear even when
the search matches an existing model under normalization.
Code

webview-ui/src/components/settings/ModelPicker.tsx[R102-115]

+	// kilocode_change: Case-insensitive search with dash/space normalization
+	const normalizeForSearch = (str: string) => str.toLowerCase().replace(/[-_\s]/g, "")
+
+	const filteredPreferredIds = useMemo(() => {
+		if (!searchValue.trim()) return preferredModelIds
+		const searchNormalized = normalizeForSearch(searchValue)
+		return preferredModelIds.filter((id) => normalizeForSearch(id).includes(searchNormalized))
+	}, [preferredModelIds, searchValue])
+
+	const filteredRestIds = useMemo(() => {
+		if (!searchValue.trim()) return restModelIds
+		const searchNormalized = normalizeForSearch(searchValue)
+		return restModelIds.filter((id) => normalizeForSearch(id).includes(searchNormalized))
+	}, [restModelIds, searchValue])
Evidence
Filtering uses normalizeForSearch(id).includes(normalizeForSearch(searchValue)), so entering a
casing/spacing variant can produce matches. Yet the custom option guard only checks exact membership
of the raw searchValue string in modelIds, so it will still offer “Use custom model” for values
that already match an existing model ID after normalization.

webview-ui/src/components/settings/ModelPicker.tsx[96-115]
webview-ui/src/components/settings/ModelPicker.tsx[276-280]

Agent prompt
The issue below was found during a code review. Follow the provided context and guidance below and implement a solution

### Issue description
With normalized, case-insensitive filtering enabled, the UI can still show a “Use custom model” option for search values that correspond to an existing model (differing only by case/dashes/spaces), enabling accidental selection of an invalid model id variant.

### Issue Context
Filtering is performed on normalized strings, but the custom-model guard uses exact string equality against `modelIds`.

### Fix Focus Areas
- webview-ui/src/components/settings/ModelPicker.tsx[102-115]
- webview-ui/src/components/settings/ModelPicker.tsx[276-280]

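One possible fix, sketched with the same normalization helper the filter uses; `shouldOfferCustomModel` and its signature are illustrative, not the component's actual code:

```typescript
// Same normalization the filter uses: lowercase and drop dashes/underscores/spaces.
const normalizeForSearch = (str: string) => str.toLowerCase().replace(/[-_\s]/g, "")

// Offer "Use custom model" only when no existing id matches the search
// value under the same normalization the filter applies.
function shouldOfferCustomModel(searchValue: string, modelIds: string[]): boolean {
	const trimmed = searchValue.trim()
	if (!trimmed) return false
	const needle = normalizeForSearch(trimmed)
	return !modelIds.some((id) => normalizeForSearch(id) === needle)
}
```

This keeps the guard consistent with the filter: if "Kimi K2" already surfaces `kimi-k2` in the filtered list, the custom option stays hidden.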



Comment on lines +282 to +285
const data = ev.data

const content = data.choices[0]?.delta.content
if (typeof content === "string") {
Action required

1. Unsafe data.choices[0] access 📘 Rule violation ⛯ Reliability

streamFim now indexes data.choices[0] without guarding when choices is missing, which can
throw at runtime and break streaming. This violates the requirement to explicitly handle null/empty
edge cases.
Agent Prompt
## Issue description
`streamFim` can throw when `ev.data.choices` is missing because it uses `data.choices[0]` without optional chaining on `choices`.

## Issue Context
The previous implementation used safe optional chaining (`data.choices?.[0]?.delta?.content`). The new SDK event payload may omit fields in some chunks or error cases.

## Fix Focus Areas
- src/api/providers/mistral.ts[282-285]

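A minimal sketch of the guarded access, restoring the earlier optional-chaining pattern; the `StreamChunk` shape is a simplified assumption about the SDK event payload:

```typescript
// Simplified stand-in for the SDK's streamed event data; any field may be absent.
interface StreamChunk {
	choices?: Array<{ delta?: { content?: string | null } }>
}

// Guard every level so a chunk without choices (or with an empty array)
// yields no content instead of throwing mid-stream.
function extractContent(data: StreamChunk): string | undefined {
	const content = data.choices?.[0]?.delta?.content
	return typeof content === "string" ? content : undefined
}
```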

Comment on lines +271 to 279
let response
try {
response = await this.client.fim.stream(request)
} catch (error) {
const errorMessage = error instanceof Error ? error.message : String(error)
const apiError = new ApiProviderError(errorMessage, this.providerName, model, "streamFim")
TelemetryService.instance.captureException(apiError)
throw new Error(`Mistral FIM completion error: ${errorMessage}`)
}
Action required

2. Mistral fim tests stale 🐞 Bug ⛯ Reliability

mistral-fim.spec.ts still mocks and asserts the old fetch+SSE behavior, but streamFim now uses the
Mistral SDK (client.fim.stream). This mismatch will keep CI failing until the tests are updated to
mock the SDK layer.
Agent Prompt
### Issue description
Unit tests in `mistral-fim.spec.ts` are written for the previous implementation (manual `fetch` + `streamSse`). The provider now uses the Mistral SDK’s `client.fim.stream`, so these tests will fail.

### Issue Context
`mistral.spec.ts` already demonstrates the pattern used in this repo for mocking the Mistral SDK (`@mistralai/mistralai`) by returning an async-iterable stream.

### Fix Focus Areas
- src/api/providers/__tests__/mistral-fim.spec.ts[7-191]
- src/api/providers/__tests__/mistral.spec.ts[11-48]
- src/api/providers/mistral.ts[250-305]

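The SDK-mocking pattern the prompt points at can be sketched as an async-iterable stub; the event shape and helper names here are assumptions, not the actual mocks in `mistral.spec.ts`:

```typescript
// Simplified stand-in for the SDK's FIM stream events.
interface FimEvent {
	data: { choices?: Array<{ delta?: { content?: string } }> }
}

// Fake stream in the shape client.fim.stream returns: an async iterable of
// events — the kind of value a vi.mock of @mistralai/mistralai would yield.
async function* fakeFimStream(chunks: string[]): AsyncGenerator<FimEvent> {
	for (const content of chunks) {
		yield { data: { choices: [{ delta: { content } }] } }
	}
}

// Consumer in the shape streamFim uses: concatenate the string deltas.
async function collectText(stream: AsyncIterable<FimEvent>): Promise<string> {
	let out = ""
	for await (const ev of stream) {
		const content = ev.data.choices?.[0]?.delta?.content
		if (typeof content === "string") out += content
	}
	return out
}
```

With a stub like this, the spec can inject `fakeFimStream(...)` in place of the SDK call and assert on the concatenated text, instead of intercepting `fetch`.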

@jeremylongshore jeremylongshore merged commit 3825a4e into main Feb 15, 2026
15 checks passed
@jeremylongshore jeremylongshore deleted the review/combined-batch-1 branch February 15, 2026 20:45