feat(anthropic): spread message-level providerOptions.anthropic onto assistant messages #12760

Open
llc1123 wants to merge 1 commit into vercel:main from llc1123:feat/anthropic-assistant-message-provider-options

Conversation

llc1123 commented Feb 22, 2026

Background

When non-Claude models (e.g. kimi-k2.5) are used via @ai-sdk/anthropic, there is no way to pass custom fields like reasoning_content on assistant messages through to the HTTP body. The openai-compatible provider already supports this by spreading providerOptions.openaiCompatible onto messages, but the Anthropic provider only reads providerOptions.anthropic at the part level (for signatures, cache control, etc.), never at the message level.

Summary

Collect message-level providerOptions.anthropic (excluding cacheControl / cache_control) and spread it onto the serialized assistant message object. This mirrors how the openai-compatible provider handles providerOptions.openaiCompatible.

Changes:

  • convert-to-anthropic-messages-prompt.ts: After building the assistant content array, collect and spread extra fields from providerOptions.anthropic onto the message
  • Added 2 test cases covering the spread behavior and cache control key exclusion
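The collection step described above can be sketched as a standalone helper (identifier names here are illustrative; in the PR the logic lives inside the conversion function itself):

```typescript
// Sketch of the message-level option collection described above.
// Cache-control keys are handled at the part level, so they are excluded.
type Message = {
  providerOptions?: { anthropic?: Record<string, unknown> };
};

function collectExtraAnthropicFields(
  messages: Message[],
): Record<string, unknown> {
  const extra: Record<string, unknown> = {};
  for (const message of messages) {
    const opts = message.providerOptions?.anthropic;
    if (opts != null && typeof opts === 'object') {
      for (const [key, value] of Object.entries(opts)) {
        // cacheControl / cache_control stay part-level; skip them here.
        if (key === 'cacheControl' || key === 'cache_control') continue;
        extra[key] = value; // later messages silently win on key conflicts
      }
    }
  }
  return extra;
}
```

The collected `extra` object is then spread onto the serialized assistant message, mirroring how the openai-compatible provider treats `providerOptions.openaiCompatible`.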

Manual Verification

  • Built @ai-sdk/anthropic successfully
  • Tested with kimi-k2.5 via Anthropic SDK: reasoning_content now flows through to the HTTP body on assistant messages
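For context, the caller-side shape would look roughly like this (a sketch of the AI SDK prompt format, not code from the PR; the `reasoning_content` field is the non-Claude extension discussed above):

```typescript
// Hypothetical assistant message carrying a message-level
// providerOptions.anthropic entry; with this change, reasoning_content
// is spread onto the serialized Anthropic message in the HTTP body.
const assistantMessage = {
  role: 'assistant' as const,
  content: [{ type: 'text' as const, text: 'The answer is 42.' }],
  providerOptions: {
    anthropic: { reasoning_content: 'Step-by-step reasoning here...' },
  },
};
```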

Checklist

  • Tests have been added / updated (for bug fixes / features)
  • Documentation has been added / updated (for bug fixes / features)
  • A patch changeset for relevant packages has been added (for bug fixes / features - run pnpm changeset in the project root)
  • I have reviewed this pull request (self-review)

Related Issues

Downstream consumer: anomalyco/opencode#14641

Copilot AI review requested due to automatic review settings February 22, 2026 05:44
tigent bot added the labels ai/provider (related to a provider package; must be assigned together with at least one `provider/*` label), feature (new feature or request), and provider/anthropic (issues related to the @ai-sdk/anthropic provider) on Feb 22, 2026

Copilot AI left a comment


Pull request overview

This PR adds support for spreading message-level providerOptions.anthropic onto assistant messages in the Anthropic provider, mirroring the behavior already present in the openai-compatible provider. This enables custom fields like reasoning_content to pass through to the HTTP body when using non-Claude models (e.g., kimi-k2.5) via the Anthropic SDK.

Changes:

  • Added logic to collect message-level providerOptions.anthropic (excluding cache control keys) and spread them onto serialized assistant messages
  • Added test cases to verify the spread behavior and cache control key exclusion
  • Added a patch changeset documenting the feature

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated no comments.

  • packages/anthropic/src/convert-to-anthropic-messages-prompt.ts: Implements the core feature by iterating through assistant messages in a block, collecting their provider options, and spreading them onto the final message object
  • packages/anthropic/src/convert-to-anthropic-messages-prompt.test.ts: Adds two test cases, one verifying that custom fields are spread correctly and another verifying that cache control keys are excluded
  • .changeset/spread-anthropic-provider-options.md: Documents the feature as a patch-level change
Comments suppressed due to low confidence (3)

packages/anthropic/src/convert-to-anthropic-messages-prompt.ts:1007

  • When multiple assistant messages are combined into a single block, the current implementation iterates through all messages and collects their provider options. If multiple messages have different values for the same key, later values will silently overwrite earlier ones. This could lead to unexpected behavior where provider options from earlier messages in the block are lost. Consider either:
  1. Only applying provider options from the first message (to match a "first wins" strategy)
  2. Only applying provider options from the last message (to match a "last wins" strategy)
  3. Warning when there are conflicting keys across multiple messages
  4. Merging array values or throwing an error on conflicts

The current behavior is implicit "last wins" but this should be either documented or made more explicit.

        const extra: Record<string, unknown> = {};
        for (const message of block.messages) {
          const opts = message.providerOptions?.anthropic;
          if (opts != null && typeof opts === 'object') {
            for (const [k, v] of Object.entries(opts)) {
              if (k === 'cacheControl' || k === 'cache_control') continue;
              extra[k] = v;
            }
          }
        }

packages/anthropic/src/convert-to-anthropic-messages-prompt.test.ts:2014

  • Consider adding a test case for the scenario where multiple sequential assistant messages each have different providerOptions.anthropic values. This would document the expected behavior when messages are combined into a single assistant message (currently "last wins" due to object spread). For example, test what happens when two assistant messages both specify reasoning_content with different values.
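The "last wins" behavior referenced here is plain object-spread semantics, e.g.:

```typescript
// Two assistant messages both set reasoning_content; spreading the
// later object into the merged result keeps the later value.
const fromFirstMessage = { reasoning_content: 'draft reasoning' };
const fromSecondMessage = { reasoning_content: 'final reasoning' };
const merged = { ...fromFirstMessage, ...fromSecondMessage };
// merged.reasoning_content === 'final reasoning'
```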

packages/anthropic/src/convert-to-anthropic-messages-prompt.ts:1001

  • The type guard typeof opts === 'object' will also match arrays in JavaScript. If providerOptions.anthropic could potentially be an array (though unlikely based on the intended API design), this could cause unexpected behavior when iterating with Object.entries(). Consider adding && !Array.isArray(opts) to the condition for more robust type checking, or document that providerOptions.anthropic must be a plain object.
          if (opts != null && typeof opts === 'object') {


llc1123 commented Feb 22, 2026

Thanks for the review! Addressing each point:

  1. Array.isArray guard: providerOptions.anthropic comes from the AI SDK's SharedV3ProviderMetadata type, constructed internally by the SDK. It won't be an array.

  2. Object.hasOwn: Object.entries() already iterates only own enumerable properties. It does not traverse the prototype chain, so Object.hasOwn is redundant here.

  3. Reserved key conflicts: the openai-compatible provider spreads providerOptions.openaiCompatible without reserved-key checks either. This is developer-controlled input; if someone passes role or content, that's on them. Keeping behavior consistent with the existing pattern.

  4. satisfies Partial<...>: satisfies Partial provides no meaningful constraint on an object literal with a spread. The current as cast is sufficient and consistent with patterns elsewhere in the codebase.

  5. Documentation: agreed, can be added as a follow-up.

  6. Performance / break early: assistant blocks typically contain 1-2 messages (the SDK merges consecutive assistant messages into one block), so there is no meaningful optimization opportunity here.
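Point 2 can be verified directly; Object.entries() visits only own enumerable properties:

```typescript
// Inherited properties are not visited by Object.entries(),
// so an extra Object.hasOwn check would add nothing here.
const proto = { inherited: 1 };
const opts = Object.assign(Object.create(proto), { own: 2 });
console.log(Object.entries(opts)); // [ [ 'own', 2 ] ]
```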
