Support both reasoning_content and reasoning fields in LiteLLM adapter #3709
Conversation
Fixes google#3694

Extends `_extract_reasoning_value()` to check for both `reasoning_content` (LiteLLM standard) and `reasoning` (used by some providers) field names. This maximizes compatibility across the OpenAI-compatible ecosystem. The downstream processing in `_iter_reasoning_texts()` was already prepared to handle both field names, but the extraction step was missing support for the `reasoning` field.

Changes:
- Updated `_extract_reasoning_value()` to check both field names
- Prioritizes `reasoning_content` when both fields are present
- Added 12 comprehensive unit tests

All 113 tests passing. Fully backward compatible.
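The extended lookup can be sketched as follows. This is a simplified stand-in: the actual `_extract_reasoning_value()` in `lite_llm.py` also handles litellm's message objects, so treat the signature and name here as illustrative only.

```python
from typing import Any, Optional


def extract_reasoning_value(message: dict) -> Optional[Any]:
    """Simplified sketch: prefer the LiteLLM-standard field name.

    'reasoning_content' wins whenever the key is present; otherwise
    fall back to the 'reasoning' field used by some providers.
    """
    if "reasoning_content" in message:
        return message["reasoning_content"]
    return message.get("reasoning")
```

For example, `extract_reasoning_value({"reasoning": "step 1"})` returns `"step 1"`, while a message carrying both keys yields the `reasoning_content` value.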
Code Review
This is a great addition for improving compatibility with different OpenAI-compatible providers. The change is clear, and the extensive test suite is much appreciated.
I found a subtle bug in the dictionary-based extraction logic that could cause incorrect precedence when reasoning_content has a falsy value (like an empty string). I've left a suggestion to fix this and a corresponding update for one of the new tests.
Overall, excellent work on this!
Address gemini-code-assist review feedback:
- Use two-argument `dict.get()` to check key presence instead of truthiness; this ensures `reasoning_content` takes precedence even with falsy values
- Update test to expect empty string instead of `None` for consistency

The previous `or` operator would incorrectly fall back to `reasoning` when `reasoning_content` was present but had a falsy value like `''`. Now using `dict.get(key, default)` properly maintains precedence.
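The difference is easy to reproduce in isolation (`delta` here is just an illustrative dict, not the adapter's actual response object):

```python
delta = {"reasoning_content": "", "reasoning": "fallback"}

# Buggy: `or` tests truthiness, so an empty-but-present
# reasoning_content is skipped and the wrong field wins.
buggy = delta.get("reasoning_content") or delta.get("reasoning")

# Fixed: two-argument dict.get() only falls back when the
# reasoning_content key is absent, preserving precedence.
fixed = delta.get("reasoning_content", delta.get("reasoning"))

print(repr(buggy))  # 'fallback'
print(repr(fixed))  # ''
```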
Fixes #3694
Summary
Extends the LiteLLM adapter to support both `reasoning_content` (LiteLLM standard) and `reasoning` (used by some providers) field names for reasoning content extraction. This maximizes compatibility across the OpenAI-compatible ecosystem without breaking existing functionality.

Problem
The current implementation only checks for `reasoning_content`, which works for providers following the LiteLLM standard but fails to extract reasoning from some providers that use the `reasoning` field name instead.

Solution
Updated `_extract_reasoning_value()` to check for both field names:
- `reasoning_content` - LiteLLM standard (Microsoft Azure/Foundry, etc.)
- `reasoning` - Used by some providers (LM Studio)

The implementation prioritizes `reasoning_content` when both fields are present, maintaining backward compatibility with the LiteLLM standard.

Note: The downstream processing in `_iter_reasoning_texts()` (line 124) was already prepared to handle both field names, but it never received the data because `_extract_reasoning_value()` wasn't extracting it. This fix completes the missing extraction step, allowing the existing processing logic to work as intended.

Changes
Code Changes
`src/google/adk/models/lite_llm.py`
- Updated `_extract_reasoning_value()` to check both `reasoning_content` and `reasoning` fields

Test Changes
`tests/unittests/models/test_litellm.py`
- `test_message_to_generate_content_response_reasoning_field()`
- `test_model_response_to_generate_content_response_reasoning_field()`
- `test_reasoning_content_takes_precedence_over_reasoning()`
- Additional unit tests for the `_extract_reasoning_value()` function

Testing Plan
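The precedence test might look roughly like this in isolation. It is a sketch against a hypothetical standalone helper named `_extract`; the real tests in `test_litellm.py` exercise the adapter's response conversion instead.

```python
def _extract(message: dict):
    # Hypothetical stand-in for the adapter's extraction step:
    # key presence decides precedence, not truthiness.
    return message.get("reasoning_content", message.get("reasoning"))


def test_reasoning_content_takes_precedence_over_reasoning():
    msg = {"reasoning_content": "chain A", "reasoning": "chain B"}
    assert _extract(msg) == "chain A"


def test_reasoning_field_used_when_standard_field_absent():
    assert _extract({"reasoning": "chain B"}) == "chain B"
```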
✅ Unit Tests
All tests pass (113 tests total in test_litellm.py):
```
$ .venv/bin/pytest tests/unittests/models/test_litellm.py -v
# 113 passed, 5 warnings (104 existing + 9 new)
```

Coverage:
- `reasoning_content` field extraction (existing functionality)
- `reasoning` field extraction (new functionality)

✅ Manual E2E Testing
Test Setup:
- LM Studio local server (`http://localhost:1234`)
- Model: `openai/gpt-oss-20b`

Before Fix:
After Fix:
Provider Compatibility
`reasoning_content`, `reasoning_content`, `reasoning`, `reasoning`, `reasoning_content`*

* Not directly tested, but vLLM documentation confirms it uses the `reasoning` field
✅ Fully backward compatible
- Providers using `reasoning_content` continue to work unchanged
- `reasoning_content` takes precedence when both fields present (maintains LiteLLM standard)

Code Quality
- Formatted with `isort` and `pyink`

Checklist