Skip to content

Conversation

@mikkokirvesoja
Copy link

Fixes #3694

Summary

Extends the LiteLLM adapter to support both reasoning_content (LiteLLM standard) and reasoning (used by some providers) field names for reasoning content extraction. This maximizes compatibility across the OpenAI-compatible ecosystem without breaking existing functionality.

Problem

The current implementation only checks for reasoning_content, which works for providers following the LiteLLM standard but fails to extract reasoning from some providers that use the reasoning field name instead.

Solution

Updated _extract_reasoning_value() to check for both field names:

  1. reasoning_content - LiteLLM standard (Microsoft Azure/Foundry, etc.)
  2. reasoning - Used by some providers (LM Studio)

The implementation prioritizes reasoning_content when both fields are present, maintaining backward compatibility with the LiteLLM standard.

Note: The downstream processing in _iter_reasoning_texts() (line 124) was already prepared to handle both field names, but it never received the data because _extract_reasoning_value() wasn't extracting it. This fix completes the missing extraction step, allowing the existing processing logic to work as intended.

Changes

Code Changes

  • src/google/adk/models/lite_llm.py
    • Updated _extract_reasoning_value() to check both reasoning_content and reasoning fields
    • Added comprehensive docstring explaining the dual-field support
    • Maintains backward compatibility - existing providers continue to work

Test Changes

  • tests/unittests/models/test_litellm.py
    • Added test_message_to_generate_content_response_reasoning_field()
    • Added test_model_response_to_generate_content_response_reasoning_field()
    • Added test_reasoning_content_takes_precedence_over_reasoning()
    • Added 9 comprehensive tests for _extract_reasoning_value() function:
      • Tests for both field names (attribute and dict access)
      • Precedence testing when both fields present
      • Edge cases (None, empty strings, missing fields)

Testing Plan

✅ Unit Tests

All tests pass (113 tests total in test_litellm.py):

$ .venv/bin/pytest tests/unittests/models/test_litellm.py -v
# 113 passed, 5 warnings (104 existing + 9 new)

Coverage:

  • reasoning_content field extraction (existing functionality)
  • reasoning field extraction (new functionality)
  • ✅ Precedence when both fields present
  • ✅ None/empty handling
  • ✅ Dict and object attribute access
  • ✅ No regression in existing tests

✅ Manual E2E Testing

Test Setup:

  • LM Studio running locally (http://localhost:1234)
  • Model: openai/gpt-oss-20b

Before Fix:

Non-streaming: Total thought parts: 0  ❌
Streaming: Total thought parts: 0      ❌

After Fix:

Non-streaming: Total thought parts: 1  ✅
  Thought part 1: "We need to answer with step-by-step reasoning..."
  
Streaming: Total thought parts: X      ✅
  Reasoning content successfully extracted from streaming chunks

Provider Compatibility

Provider Field Name Before After
LiteLLM Standard reasoning_content ✅ Works ✅ Works
Microsoft Azure/Foundry reasoning_content ✅ Works ✅ Works
vLLM reasoning ❌ Broken ✅ Fixed*
LM Studio reasoning ❌ Broken ✅ Fixed
Ollama (via LiteLLM) reasoning_content ✅ Works ✅ Works

* Not directly tested, but vLLM documentation confirms it uses the reasoning field

Backward Compatibility

Fully backward compatible

  • Existing providers using reasoning_content continue to work unchanged
  • No breaking changes to API or behavior
  • Prioritizes reasoning_content when both fields present (maintains LiteLLM standard)

Code Quality

  • ✅ All existing tests pass (no regressions)
  • ✅ New tests added for new functionality
  • ✅ Code formatted with isort and pyink
  • ✅ Follows Google Python Style Guide
  • ✅ Comprehensive docstrings

Checklist

  • Code changes implemented
  • Unit tests added and passing
  • Manual E2E testing completed
  • Code formatted with autoformat.sh
  • No regressions in existing tests
  • Backward compatible
  • Documentation updated (inline docstrings)
  • Ready for review

Fixes google#3694

Extends _extract_reasoning_value() to check for both 'reasoning_content'
(LiteLLM standard) and 'reasoning' (used by some providers) field names.
This maximizes compatibility across the OpenAI-compatible ecosystem.

The downstream processing in _iter_reasoning_texts() was already prepared
to handle both field names, but the extraction step was missing support
for the 'reasoning' field.

Changes:
- Updated _extract_reasoning_value() to check both field names
- Prioritizes reasoning_content when both fields are present
- Added 12 comprehensive unit tests

All 113 tests passing. Fully backward compatible.
@gemini-code-assist
Copy link
Contributor

Summary of Changes

Hello @mikkokirvesoja, I'm Gemini Code Assist1! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly improves the LiteLLM adapter's ability to extract reasoning information from model responses. By extending the _extract_reasoning_value function to recognize both the reasoning_content and reasoning fields, it ensures broader compatibility across the OpenAI-compatible ecosystem. This change allows for the correct processing of thought processes from providers that use the alternative reasoning field, thereby enriching the model's output interpretation without disrupting existing integrations.

Highlights

  • Enhanced LiteLLM Adapter Compatibility: The LiteLLM adapter now supports both "reasoning_content" (LiteLLM standard) and "reasoning" fields for extracting reasoning content, improving compatibility with various OpenAI-compatible providers like LM Studio and vLLM.
  • Prioritized Reasoning Field Extraction: The implementation prioritizes "reasoning_content" when both fields are present, ensuring backward compatibility and adherence to the LiteLLM standard.
  • Comprehensive Test Coverage: Extensive unit tests have been added for the _extract_reasoning_value function, covering attribute and dictionary access, precedence, edge cases (None, empty strings, missing fields), and ensuring no regressions.
  • Verified Functionality: Manual end-to-end testing with LM Studio confirmed successful reasoning content extraction for providers using the "reasoning" field.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature Command Description
Code Review /gemini review Performs a code review for the current pull request in its current state.
Pull Request Summary /gemini summary Provides a summary of the current pull request in its current state.
Comment @gemini-code-assist Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help /gemini help Displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counter productive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving your feedback about your experience with Gemini Code Assist for Github and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.

@adk-bot adk-bot added the models [Component] Issues related to model support label Nov 25, 2025
Copy link
Contributor

@gemini-code-assist gemini-code-assist bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Code Review

This is a great addition for improving compatibility with different OpenAI-compatible providers. The change is clear, and the extensive test suite is much appreciated.

I found a subtle bug in the dictionary-based extraction logic that could cause incorrect precedence when reasoning_content has a falsy value (like an empty string). I've left a suggestion to fix this and a corresponding update for one of the new tests.

Overall, excellent work on this!

Address gemini-code-assist review feedback:
- Use two-argument dict.get() to check key presence instead of truthiness
- This ensures reasoning_content takes precedence even with falsy values
- Update test to expect empty string instead of None for consistency

The previous 'or' operator would incorrectly fall back to 'reasoning'
when 'reasoning_content' was present but had a falsy value like ''.
Now using dict.get(key, default) properly maintains precedence.
@ryanaiagent ryanaiagent self-assigned this Nov 25, 2025
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

models [Component] Issues related to model support

Projects

None yet

Development

Successfully merging this pull request may close these issues.

ADK LiteLlm adapter drops LiteLLM reasoning content

3 participants