feat(anthropic): spread message-level providerOptions.anthropic onto assistant messages #12760
Conversation
Pull request overview
This PR adds support for spreading message-level providerOptions.anthropic onto assistant messages in the Anthropic provider, mirroring the behavior already present in the openai-compatible provider. This enables custom fields like reasoning_content to pass through to the HTTP body when using non-Claude models (e.g., kimi-k2.5) via the Anthropic SDK.
Changes:
- Added logic to collect message-level `providerOptions.anthropic` (excluding cache control keys) and spread it onto serialized assistant messages
- Added test cases to verify the spread behavior and cache control key exclusion
- Added a patch changeset documenting the feature
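As a rough sketch of the intended effect (types and field names are illustrative, not the provider's actual definitions), a message-level option such as `reasoning_content` ends up alongside `role` and `content` in the serialized assistant message:

```typescript
// Hypothetical shapes illustrating the spread; not the provider's real types.
type SerializedAssistantMessage = {
  role: 'assistant';
  content: Array<{ type: 'text'; text: string }>;
  // fields spread in from message-level providerOptions.anthropic
  [extra: string]: unknown;
};

// Extra fields collected from providerOptions.anthropic (cache keys excluded).
const extra = { reasoning_content: 'chain of thought...' };

const message: SerializedAssistantMessage = {
  role: 'assistant',
  content: [{ type: 'text', text: 'Hello' }],
  ...extra, // the spread this PR adds
};

console.log(message.reasoning_content); // 'chain of thought...'
```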
Reviewed changes
Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.
| File | Description |
|---|---|
| packages/anthropic/src/convert-to-anthropic-messages-prompt.ts | Implements the core feature by iterating through assistant messages in a block, collecting their provider options, and spreading them onto the final message object |
| packages/anthropic/src/convert-to-anthropic-messages-prompt.test.ts | Adds two test cases: one verifying that custom fields are spread correctly, and another verifying that cache control keys are excluded |
| .changeset/spread-anthropic-provider-options.md | Documents the feature as a patch-level change |
Comments suppressed due to low confidence (3)
packages/anthropic/src/convert-to-anthropic-messages-prompt.ts:1007
- When multiple assistant messages are combined into a single block, the current implementation iterates through all messages and collects their provider options. If multiple messages have different values for the same key, later values will silently overwrite earlier ones. This could lead to unexpected behavior where provider options from earlier messages in the block are lost. Consider one of the following:
- Only applying provider options from the first message (to match a "first wins" strategy)
- Only applying provider options from the last message (to match a "last wins" strategy)
- Warning when there are conflicting keys across multiple messages
- Merging array values or throwing an error on conflicts
The current behavior is implicit "last wins" but this should be either documented or made more explicit.
```ts
const extra: Record<string, unknown> = {};
for (const message of block.messages) {
  const opts = message.providerOptions?.anthropic;
  if (opts != null && typeof opts === 'object') {
    for (const [k, v] of Object.entries(opts)) {
      if (k === 'cacheControl' || k === 'cache_control') continue;
      extra[k] = v;
    }
  }
}
```
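One way to make the merge explicit, sketched below with a hypothetical helper name and message shape (not the PR's actual code), is to keep last-wins but warn when keys conflict across combined messages:

```typescript
// Sketch: collect message-level anthropic options with an explicit warning
// on conflicting keys across combined assistant messages (last value wins).
function collectExtraOptions(
  messages: Array<{ providerOptions?: { anthropic?: Record<string, unknown> } }>,
): Record<string, unknown> {
  const extra: Record<string, unknown> = {};
  for (const message of messages) {
    const opts = message.providerOptions?.anthropic;
    if (opts != null && typeof opts === 'object' && !Array.isArray(opts)) {
      for (const [k, v] of Object.entries(opts)) {
        // cache control is handled elsewhere, so exclude it here
        if (k === 'cacheControl' || k === 'cache_control') continue;
        if (k in extra && extra[k] !== v) {
          console.warn(`conflicting provider option "${k}"; last value wins`);
        }
        extra[k] = v;
      }
    }
  }
  return extra;
}
```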
packages/anthropic/src/convert-to-anthropic-messages-prompt.test.ts:2014
- Consider adding a test case for the scenario where multiple sequential assistant messages each have different `providerOptions.anthropic` values. This would document the expected behavior when messages are combined into a single assistant message (currently "last wins" due to object spread). For example, test what happens when two assistant messages both specify `reasoning_content` with different values.
```ts
});
```
packages/anthropic/src/convert-to-anthropic-messages-prompt.ts:1001
- The type guard `typeof opts === 'object'` will also match arrays in JavaScript. If `providerOptions.anthropic` could potentially be an array (though unlikely based on the intended API design), this could cause unexpected behavior when iterating with `Object.entries()`. Consider adding `&& !Array.isArray(opts)` to the condition for more robust type checking, or document that `providerOptions.anthropic` must be a plain object.

```ts
if (opts != null && typeof opts === 'object') {
```
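A quick illustration of the pitfall the review comment describes:

```typescript
// Arrays satisfy `typeof x === 'object'`, so the guard alone lets them
// through; Object.entries() then yields index/value pairs instead of
// named provider options.
const maybeOpts: unknown = ['a', 'b'];

console.log(typeof maybeOpts === 'object'); // true, even for an array
if (maybeOpts != null && typeof maybeOpts === 'object') {
  // logs index/value pairs: [ [ '0', 'a' ], [ '1', 'b' ] ]
  console.log(Object.entries(maybeOpts));
}

// Adding the suggested check rejects arrays:
console.log(typeof maybeOpts === 'object' && !Array.isArray(maybeOpts)); // false
```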
Force-pushed (…assistant messages) from 07d55af to 2fc85e3.
Thanks for the review! Addressing each point:
Background
When non-Claude models (e.g. kimi-k2.5) are used via `@ai-sdk/anthropic`, there is no way to pass custom fields like `reasoning_content` on assistant messages through to the HTTP body. The openai-compatible provider already supports this by spreading `providerOptions.openaiCompatible` onto messages, but the Anthropic provider only reads `providerOptions.anthropic` at the part level (for signatures, cache control, etc.), never at the message level.

Summary
Collect message-level `providerOptions.anthropic` (excluding `cacheControl`/`cache_control`) and spread it onto the serialized assistant message object. This mirrors how the openai-compatible provider handles `providerOptions.openaiCompatible`.

Changes:
- `convert-to-anthropic-messages-prompt.ts`: After building the assistant content array, collect and spread extra fields from `providerOptions.anthropic` onto the message

Manual Verification
- Tested via `@ai-sdk/anthropic` successfully: `reasoning_content` now flows through to the HTTP body on assistant messages

Checklist
- Added a changeset (run `pnpm changeset` in the project root)

Related Issues
Downstream consumer: anomalyco/opencode#14641