
Conversation

@KyleKreuter

Fixes #5019

Summary

Mistral's Magistral reasoning models return content as an array of content blocks
instead of a plain string.

Changes

  • Update content() to extract the text block from thinking responses
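
For illustration, a minimal sketch of the kind of extraction this change performs, assuming the raw content is viewed as a Jackson JsonNode; the helper below is illustrative only and is not the actual method in MistralAiApi:

```java
import com.fasterxml.jackson.databind.JsonNode;

final class MagistralContentSupport {

    // Magistral may return "content" either as a plain string or as an array of
    // blocks such as [{"type":"thinking", ...}, {"type":"text","text":"..."}].
    // This sketch flattens that into the final text answer.
    static String extractText(JsonNode content) {
        if (content == null || content.isNull()) {
            return null;
        }
        if (content.isTextual()) {
            return content.asText();
        }
        if (content.isArray()) {
            StringBuilder text = new StringBuilder();
            for (JsonNode block : content) {
                if ("text".equals(block.path("type").asText())) {
                    text.append(block.path("text").asText());
                }
            }
            return text.toString();
        }
        return content.toString();
    }
}
```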

Tested with

  • Model: magistral-medium-2509
  • Response format verified with direct API call

  Mistral's Magistral reasoning models return content as an array
  of content blocks instead of a plain string. Update content()
  method to extract the text block from thinking responses.

  Closes spring-projects#5019

Signed-off-by: Kyle Kreuter <[email protected]>
@nicolaskrier
Contributor

nicolaskrier commented Dec 3, 2025

Hello @KyleKreuter, thanks for your contribution!

Here are a few suggestions to improve Magistral support:

  • Add a MistralAiAssistantMessage class (like ZhiPuAiAssistantMessage) to handle both content and reasoning fields.
  • Replace List/Map in MistralAiApi with dedicated classes: ThinkChunk, ReferenceChunk, and TextChunk, as described in the Mistral AI API documentation (see the sketch after this list).
  • Support the reasoning prompt mode as described in the API documentation (not related to the current issue).
  • Add unit/integration tests to validate these features.
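
To make the chunk-class suggestion concrete, here is a rough sketch of one possible shape, assuming Jackson polymorphic deserialization keyed on the type field. The class names follow the suggestion above, but the exact fields (especially for ReferenceChunk) and annotations are assumptions, not the Mistral API contract:

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.annotation.JsonSubTypes;
import com.fasterxml.jackson.annotation.JsonTypeInfo;
import java.util.List;

// Illustrative only: one possible shape for typed content chunks,
// discriminated by the "type" field of each block.
@JsonTypeInfo(use = JsonTypeInfo.Id.NAME, property = "type")
@JsonSubTypes({
    @JsonSubTypes.Type(value = TextChunk.class, name = "text"),
    @JsonSubTypes.Type(value = ThinkChunk.class, name = "thinking"),
    @JsonSubTypes.Type(value = ReferenceChunk.class, name = "reference")
})
public sealed interface ContentChunk permits TextChunk, ThinkChunk, ReferenceChunk {
}

// A plain text block: {"type":"text","text":"..."}
record TextChunk(@JsonProperty("text") String text) implements ContentChunk {
}

// A reasoning block whose "thinking" field nests further text blocks.
record ThinkChunk(@JsonProperty("thinking") List<TextChunk> thinking) implements ContentChunk {
}

// A citation block; the field name here is an assumption.
record ReferenceChunk(@JsonProperty("reference_ids") List<Integer> referenceIds) implements ContentChunk {
}
```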

Happy to discuss or clarify if needed!

- Add MistralAiAssistantMessage class with thinkingContent field
- Add ContentChunk, TextChunk, ThinkChunk, ReferenceChunk types
- Update ChatCompletionMessage to parse nested thinking content
- Add unit and integration tests for Magistral models

Signed-off-by: Kyle Kreuter <[email protected]>
@KyleKreuter
Author

Hi, thanks for the feedback!

I've implemented the suggested improvements:

  1. MistralAiAssistantMessage class
  • Created MistralAiAssistantMessage extending AssistantMessage with a thinkingContent field
  • Follows the same pattern as ZhiPuAiAssistantMessage and DeepSeekAssistantMessage
  • Includes a Builder pattern and proper equals()/hashCode()/toString() implementations (a usage sketch follows this list)
  2. Dedicated chunk classes
  • Added sealed interface ContentChunk with three implementations:
    • TextChunk - for text content blocks
    • ThinkChunk - for thinking/reasoning content blocks
    • ReferenceChunk - for citation references
  • Updated ChatCompletionMessage with content(), thinkingContent(), and contentChunks() methods
  • Note: The Mistral API returns thinking content in a nested format ({"type": "thinking", "thinking": [{"type": "text", "text": "..."}]}), which is now properly parsed
  3. Unit/Integration tests
  • MistralAiAssistantMessageTests - 23 test cases for the message class
  • MistralAiContentParsingTests - 25 test cases for content parsing
  • MistralAiMagistralIT - 6 integration tests with real Magistral API calls
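
A rough usage sketch under this design, assuming the new MistralAiAssistantMessage exposes a getThinkingContent() accessor and lives in the org.springframework.ai.mistralai package (both are assumptions based on the description above, not verified against the branch):

```java
import org.springframework.ai.chat.model.ChatResponse;
import org.springframework.ai.chat.prompt.Prompt;
import org.springframework.ai.mistralai.MistralAiAssistantMessage; // new class from this PR; package assumed
import org.springframework.ai.mistralai.MistralAiChatModel;
import org.springframework.ai.mistralai.MistralAiChatOptions;

public class MagistralThinkingExample {

    // chatModel would typically be auto-configured by Spring Boot and injected.
    static void printAnswerAndReasoning(MistralAiChatModel chatModel) {
        ChatResponse response = chatModel.call(new Prompt(
                "Why is the sky blue?",
                MistralAiChatOptions.builder()
                        .model("magistral-medium-2509") // model used in this PR's tests
                        .build()));

        var output = response.getResult().getOutput();
        // getText() on recent Spring AI versions; older versions expose getContent().
        System.out.println("Answer: " + output.getText());

        // getThinkingContent() is an assumed accessor for the thinkingContent field described above.
        if (output instanceof MistralAiAssistantMessage magistral) {
            System.out.println("Reasoning: " + magistral.getThinkingContent());
        }
    }
}
```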

Please let me know if you have any further suggestions.

