Cherry pick Kimi AI SDK changes from upstream #5662
Conversation
🦋 Changeset detected. Latest commit: 228745b
The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
```ts
	input: unknown
}> = []

for (const part of message.content) {
```
WARNING: Assistant content ordering is not preserved when converting tool calls
In convertToAiSdkMessages(), assistant text parts are accumulated separately and then emitted before all tool-call parts. This can change semantics vs. the original Anthropic message where text/tool_use blocks may be interleaved (e.g., text → tool call → more text).
Consider building the AI SDK content array in a single pass over message.content to preserve the original ordering.
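A single-pass version could look like the sketch below. The block and part shapes are simplified stand-ins for the Anthropic and AI SDK types, not the PR's actual code:

```typescript
// Simplified stand-ins for Anthropic content blocks and AI SDK parts.
type AnthropicBlock =
	| { type: "text"; text: string }
	| { type: "tool_use"; id: string; name: string; input: unknown }

type AiSdkPart =
	| { type: "text"; text: string }
	| { type: "tool-call"; toolCallId: string; toolName: string; input: unknown }

// One pass over the blocks: each block is appended where it appears, so
// interleaved text → tool call → text sequences survive intact.
function convertAssistantContent(blocks: AnthropicBlock[]): AiSdkPart[] {
	const parts: AiSdkPart[] = []
	for (const block of blocks) {
		if (block.type === "text") {
			parts.push({ type: "text", text: block.text })
		} else {
			parts.push({
				type: "tool-call",
				toolCallId: block.id,
				toolName: block.name,
				input: block.input,
			})
		}
	}
	return parts
}
```

Accumulating text and tool calls in separate arrays is what loses the interleaving; appending in block order avoids it with no extra cost.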
```ts
	model: languageModel,
	prompt,
	maxOutputTokens: this.getMaxOutputTokens(),
	temperature: this.config.temperature ?? 0,
```
SUGGESTION: completePrompt() ignores model-derived temperature
createMessage() uses model.temperature ?? this.config.temperature ?? 0, but completePrompt() uses only this.config.temperature ?? 0. If getModel() supplies a non-zero default temperature, completePrompt() will behave differently than streaming.
Consider using the same precedence as createMessage() (model-derived temperature first) for consistency.
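A shared helper would make the precedence explicit in one place; the function and parameter names here are illustrative, not from the PR:

```typescript
// Model-derived temperature wins, then the user config, then 0 —
// the same precedence createMessage() already uses.
function resolveTemperature(
	modelTemperature: number | undefined,
	configTemperature: number | undefined,
): number {
	return modelTemperature ?? configTemperature ?? 0
}
```

Note that `??` (not `||`) is what keeps an explicit temperature of `0` from being skipped.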
Code Review Summary
Status: 8 Issues Found | Recommendation: Address before merge
Files Reviewed: 12 files
Force-pushed from 2d5b94b to 34cf829
Force-pushed from 1d76550 to 228745b
```diff
@@ -1,23 +1,29 @@
 import OpenAI from "openai"
 import { moonshotModels, moonshotDefaultModelId, type ModelInfo } from "@roo-code/types"
```
WARNING: Unused ModelInfo type import may fail lint
ModelInfo is imported but never referenced in this file. If ESLint/TS settings enforce no-unused-vars/imports, this can break CI.
```diff
- import { moonshotModels, moonshotDefaultModelId, type ModelInfo } from "@roo-code/types"
+ import { moonshotModels, moonshotDefaultModelId } from "@roo-code/types"
```
```ts
 */

import { Anthropic } from "@anthropic-ai/sdk"
import OpenAI from "openai"
```
SUGGESTION: Make the openai import type-only
OpenAI appears to be used only for the tool_choice type in OpenAICompatibleHandler.mapToolChoice(). Importing it as a value can add an unnecessary runtime dependency in the bundled output.
```diff
- import OpenAI from "openai"
+ import type OpenAI from "openai"
```
```ts
 */

import { Anthropic } from "@anthropic-ai/sdk"
import OpenAI from "openai"
```
SUGGESTION: Make the openai import type-only
OpenAI is only referenced in type positions (tool definitions). Using a type-only import avoids an unnecessary runtime require/bundle edge case.
```diff
- import OpenAI from "openai"
+ import type OpenAI from "openai"
```
```ts
// UserContent only supports: string | Array<TextPart | ImagePart | FilePart>
// ToolContent (for role: "tool") supports: Array<ToolResultPart | ToolApprovalResponse>
if (toolResults.length > 0) {
	modelMessages.push({
```
WARNING: User content ordering can change when tool results are emitted
In convertToAiSdkMessages(), user tool_result parts are collected separately and then emitted as a role: "tool" message before the role: "user" text/image parts. If the original message.content interleaves text/images with tool results, this reorders content and can change semantics.
Consider building output messages in a single pass that preserves the original order (or explicitly document/guarantee the expected ordering invariants from upstream).
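Because the AI SDK forces tool results into a separate `role: "tool"` message, an order-preserving conversion has to split runs of blocks into alternating messages. A minimal sketch, with simplified stand-in types rather than the PR's real ones:

```typescript
// Simplified stand-ins for Anthropic user blocks and AI SDK messages.
type UserBlock =
	| { type: "text"; text: string }
	| { type: "tool_result"; tool_use_id: string; content: string }

type OutMessage =
	| { role: "user"; content: { type: "text"; text: string }[] }
	| { role: "tool"; content: { type: "tool-result"; toolCallId: string; output: string }[] }

// Contiguous runs of tool_result blocks become role:"tool" messages and
// runs of text blocks become role:"user" messages, emitted in the order
// the blocks appeared — instead of hoisting all tool results to the front.
function convertUserContent(blocks: UserBlock[]): OutMessage[] {
	const messages: OutMessage[] = []
	for (const block of blocks) {
		const last = messages[messages.length - 1]
		if (block.type === "tool_result") {
			const part = { type: "tool-result" as const, toolCallId: block.tool_use_id, output: block.content }
			if (last && last.role === "tool") last.content.push(part)
			else messages.push({ role: "tool", content: [part] })
		} else {
			const part = { type: "text" as const, text: block.text }
			if (last && last.role === "user") last.content.push(part)
			else messages.push({ role: "user", content: [part] })
		}
	}
	return messages
}
```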
```ts
/** Optional custom headers */
headers?: Record<string, string>
/** Whether to include max_tokens in requests (default: false uses max_completion_tokens) */
useMaxTokens?: boolean
```
SUGGESTION: useMaxTokens config is currently unused/misleading
OpenAICompatibleConfig.useMaxTokens is documented as controlling max_tokens vs max_completion_tokens, but the handler always sends AI SDK maxOutputTokens and never reads useMaxTokens.
Either implement this flag (if needed for provider quirks) or remove/rename it to avoid confusion.
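If the flag is kept, wiring it in could be as small as a helper like the hypothetical sketch below (the `applyMaxTokens` name and the raw-body shape are illustrative; the field names mirror OpenAI-style request params):

```typescript
// Hypothetical helper: pick max_tokens vs max_completion_tokens for the
// raw request body based on the useMaxTokens config flag.
function applyMaxTokens(
	body: Record<string, unknown>,
	maxOutputTokens: number,
	useMaxTokens: boolean | undefined,
): Record<string, unknown> {
	if (useMaxTokens) {
		// Legacy field, needed by some OpenAI-compatible providers.
		return { ...body, max_tokens: maxOutputTokens }
	}
	// Default documented behavior: the newer max_completion_tokens field.
	return { ...body, max_completion_tokens: maxOutputTokens }
}
```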
```diff
 "@types/react": "^18.3.23",
-"@types/react-dom": "^18.3.5"
+"@types/react-dom": "^18.3.5",
+"zod": "3.25.76",
```
WARNING: Mixed Zod majors via pnpm overrides can cause subtle runtime/type mismatches
This pins zod to 3.25.76 globally while also forcing @sap-ai-sdk/prompt-registry to use zod@^4. That will install two Zod majors in the repo, which can lead to issues like failed instanceof ZodError checks and schema/value incompatibilities across package boundaries.
If possible, consider aligning on a single Zod major (or ensure any cross-boundary error handling avoids instanceof/direct class identity checks).
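The failure mode is plain class-identity mismatch. The sketch below simulates it with two local classes standing in for the two installed copies of Zod's `ZodError` (no actual Zod majors are involved):

```typescript
// Two installed Zod majors each define their own ZodError class.
// Simulated here with two distinct local classes.
class ZodErrorV3 extends Error {}
class ZodErrorV4 extends Error {}

// An error thrown deep inside a dependency that resolved zod@4:
const err: Error = new ZodErrorV4("parse failed")

// A caller compiled against zod@3 checks identity against its own class
// and silently misses the error:
const caughtByV3Check = err instanceof ZodErrorV3 // false
```

Duck-typed checks (e.g. on `error.name` or a marker field) survive this split, which is why the review suggests avoiding `instanceof`/class-identity checks across package boundaries if two majors must coexist.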
Not working. I can help by sharing a Moonshot Kimi Code API key for debugging. @kevinvandijk
Confirming: using Moonshot as the API provider with their Kimi Code Plan, this fixes the 400 error but completely breaks tool calls. Extension version: 5.4.0.

```
MODEL_NO_TOOLS_USED [ERROR] You did not use a tool in your previous response!...
```
Created a new issue for it, #5719, as this one seems to be closed.

No description provided.