
Fix: Exclude tool params for models without function calling support (#21125)#21244

Merged
krrishdholakia merged 2 commits into BerriAI:main from AtharvaJaiswal005:fix/exclude-tool-params-no-function-calling-21125
Feb 16, 2026

Conversation

@AtharvaJaiswal005
Contributor

Summary

Fixes #21125

JSON-configured providers (like PublicAI/EuroLLM) inherit get_supported_openai_params from OpenAIGPTConfig, which always includes tool-related parameters (tools, tool_choice, function_call, functions, parallel_tool_calls). This causes issues when models on these providers don't actually support function calling — the params are incorrectly advertised as supported, leading to API errors.

Changes

  • litellm/llms/openai_like/dynamic_config.py: Override get_supported_openai_params in the dynamically generated JSONProviderConfig class to check supports_function_calling() and remove tool-related params when the model doesn't support it.
  • tests/test_litellm/llms/openai_like/test_json_providers.py: Added 2 regression tests — one verifying tool params are excluded when function calling is not supported, another verifying they're included when it is.

Approach

Follows the same pattern already used by OVHCloud (litellm/llms/ovhcloud/chat/transformation.py) and Fireworks AI (litellm/llms/fireworks_ai/chat/transformation.py) — both check supports_function_calling() before including tool params.
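The pattern can be sketched as a standalone filter (a simplified illustration of the override described above, not the PR's exact code; the param names come from the PR description):

```python
# Tool-related params listed in the PR description.
TOOL_PARAMS = {"tools", "tool_choice", "function_call", "functions",
               "parallel_tool_calls"}

def get_supported_openai_params(model, base_params, supports_fc):
    """Drop tool-related params when the model lacks function calling.

    base_params mimics the list inherited from OpenAIGPTConfig;
    supports_fc mimics the result of litellm.utils.supports_function_calling().
    """
    if supports_fc:
        return base_params
    return [p for p in base_params if p not in TOOL_PARAMS]

base = ["temperature", "max_tokens", "tools", "tool_choice",
        "functions", "function_call", "parallel_tool_calls"]
print(get_supported_openai_params("eurollm", base, supports_fc=False))
# ['temperature', 'max_tokens']
```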

Test Plan

  • test_tool_params_excluded_when_function_calling_not_supported — mocks supports_function_calling to return False, asserts tool params are removed
  • test_tool_params_included_when_function_calling_supported — mocks supports_function_calling to return True, asserts tool params are present
  • All 8 existing JSON provider tests pass
  • No regressions
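A self-contained sketch of what such a mock-based check looks like in both directions (local stand-ins replace the litellm imports; this is not the PR's actual test file):

```python
import sys
from unittest import mock

TOOL_PARAMS = {"tools", "tool_choice", "function_call", "functions",
               "parallel_tool_calls"}

def supports_function_calling(model, custom_llm_provider):
    # Stand-in for litellm.utils.supports_function_calling.
    return True

def get_supported_openai_params(model, provider_slug):
    base = ["temperature", "tools", "tool_choice"]
    if supports_function_calling(model=model, custom_llm_provider=provider_slug):
        return base
    return [p for p in base if p not in TOOL_PARAMS]

_this = sys.modules[__name__]
# Direction 1: function calling unsupported -> tool params stripped.
with mock.patch.object(_this, "supports_function_calling", return_value=False):
    assert get_supported_openai_params("eurollm", "publicai") == ["temperature"]
# Direction 2: supported -> tool params kept.
with mock.patch.object(_this, "supports_function_calling", return_value=True):
    assert "tools" in get_supported_openai_params("eurollm", "publicai")
print("both directions verified")
```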

Future Suggestion

Currently, the supported params list in OpenAIGPTConfig is hardcoded — every time OpenAI adds or changes a parameter, the base class needs a manual update. A potential improvement could be to allow JSON-configured providers to define their own supported_params list directly in providers.json, making each provider's capabilities fully data-driven instead of relying on inheritance from a hardcoded base class. This would also allow per-provider param customization without needing code changes. Happy to help with this if it's something the team is interested in!
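As a purely hypothetical illustration of that proposal (the supported_params field is an invented schema extension, not a real providers.json key):

```python
# Hypothetical data-driven capability lookup; "supported_params" is an
# invented field, shown only to illustrate the shape of the proposal.
PROVIDERS = {
    "publicai": {"supported_params": ["temperature", "max_tokens", "stream"]},
}

def supported_params(provider_slug, default):
    # Fall back to the inherited OpenAI param list when a provider
    # doesn't declare its own capabilities.
    return PROVIDERS.get(provider_slug, {}).get("supported_params", default)

print(supported_params("publicai", default=["temperature"]))
print(supported_params("unknown-provider", default=["temperature"]))
```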

Note: Open to any feedback or changes!

…ling (BerriAI#21125)

JSON-configured providers (e.g. PublicAI) inherited all OpenAI params
including tools, tool_choice, function_call, and functions — even for
models that don't support function calling. This caused an inconsistency
where get_supported_openai_params included "tools" but
supports_function_calling returned False.

The fix checks supports_function_calling in the dynamic config's
get_supported_openai_params and removes tool-related params when the
model doesn't support it. Follows the same pattern used by OVHCloud
and Fireworks AI providers.
@vercel

vercel bot commented Feb 15, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project | Deployment | Actions | Updated (UTC)
litellm | Ready | Preview, Comment | Feb 15, 2026 9:25am


@CLAassistant

CLAassistant commented Feb 15, 2026

CLA assistant check
All committers have signed the CLA.

@greptile-apps
Contributor

greptile-apps bot commented Feb 15, 2026

Greptile Summary

This PR fixes an issue where JSON-configured providers (e.g., PublicAI/EuroLLM) incorrectly advertised tool-related parameters as supported even when the underlying model doesn't support function calling, leading to API errors.

  • Overrides get_supported_openai_params in the dynamically generated JSONProviderConfig class to check supports_function_calling() and remove tool-related params (tools, tool_choice, function_call, functions, parallel_tool_calls) when the model doesn't support it
  • Follows the same pattern established by OVHCloud and Fireworks AI provider implementations
  • Adds two mock-only regression tests verifying the behavior in both directions (supported and unsupported)
  • Minor style note: the code uses inline imports inside the method body, which deviates from the project's preference for module-level imports per CLAUDE.md

Confidence Score: 4/5

  • This PR is safe to merge — it's a targeted fix that follows established patterns in the codebase with no regressions expected.
  • Score of 4 reflects: correct logic following the OVHCloud/Fireworks AI pattern, proper mock-only tests, no impact on existing behavior for models that support function calling. Minor deductions for inline imports (style) and a redundant try/except wrapper.
  • No files require special attention — litellm/llms/openai_like/dynamic_config.py has minor style concerns but no functional issues.

Important Files Changed

Filename Overview
litellm/llms/openai_like/dynamic_config.py Overrides get_supported_openai_params to check supports_function_calling() and strip tool-related params when unsupported. Follows established patterns (OVHCloud, Fireworks AI). Minor style issues: inline imports and redundant try/except.
tests/test_litellm/llms/openai_like/test_json_providers.py Adds two well-structured mock-only regression tests verifying tool params are excluded/included based on function calling support. Tests correctly use unittest.mock.patch without making real network calls.

Flowchart

flowchart TD
    A["get_supported_openai_params(model)"] --> B["super().get_supported_openai_params(model)"]
    B --> C["Call supports_function_calling(model, provider.slug)"]
    C --> D{supports FC?}
    D -->|Yes| E["Return full supported_params list"]
    D -->|No| F["Remove tool-related params:<br/>tools, tool_choice, function_call,<br/>functions, parallel_tool_calls"]
    F --> G["Log debug message"]
    G --> H["Return filtered supported_params"]
    C -->|Exception| I["Default _supports_fc = False"]
    I --> F

Last reviewed commit: 43cdcee

Contributor

@greptile-apps greptile-apps bot left a comment


2 files reviewed, 2 comments


Comment on lines +101 to +102
from litellm._logging import verbose_logger
from litellm.utils import supports_function_calling
Contributor


Inline imports should be at module level

Per CLAUDE.md, imports should be placed at the top of the file rather than inside methods. verbose_logger is already imported at the top of the sibling json_loader.py file, confirming it's safe at module level. The supports_function_calling import from litellm.utils is used inline in a few other provider files (bedrock, sambanova) — likely to avoid circular imports — so that one may be justified. Consider moving at least verbose_logger to the top-level imports.

Suggested change

- from litellm._logging import verbose_logger
- from litellm.utils import supports_function_calling
+ from litellm.utils import supports_function_calling

Context Used: Context from dashboard - CLAUDE.md (source)

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Contributor Author


Good catch! Moved verbose_logger to the top-level imports. Kept supports_function_calling as an inline import to avoid circular imports (same pattern used in other provider files like bedrock and sambanova). Fixed in 83f4d30.

Comment on lines +106 to +111
    try:
        _supports_fc = supports_function_calling(
            model=model, custom_llm_provider=provider.slug
        )
    except Exception:
        _supports_fc = False
Contributor


Redundant try/except — supports_function_calling already handles errors

The underlying _supports_factory function in litellm/utils.py already has a comprehensive try/except block that catches all exceptions and returns False when a model isn't found (lines 2526-2537 of utils.py). This outer try/except is therefore redundant. It's not harmful, but it obscures the fact that the utility function is designed to handle missing models gracefully.

If you want to keep defensive coding here for safety, that's fine — but be aware that the except Exception: _supports_fc = False path should never actually be reached under normal conditions.
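The "catch inside, return False" design the reviewer describes can be mimicked with a tiny stand-in (this is not litellm's actual _supports_factory, just an illustration of why an outer try/except adds nothing):

```python
def supports_capability(model, known_models):
    # Mirrors the described design: swallow lookup errors internally and
    # report False, so callers never need their own try/except.
    try:
        return known_models[model]["supports_function_calling"]
    except Exception:
        return False

known = {"gpt-x": {"supports_function_calling": True}}
print(supports_capability("gpt-x", known))          # True
print(supports_capability("missing-model", known))  # False
```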

Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!

Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

You're right: _supports_factory already catches all exceptions and returns False for missing models, so the outer try/except was unnecessary. Removed it. Fixed in 83f4d30.

…ry/except

Address review feedback from Greptile bot:
- Move verbose_logger import to top-level (matches project convention)
- Remove redundant try/except around supports_function_calling() since it
  already handles exceptions internally via _supports_factory()
@krrishdholakia krrishdholakia merged commit 7bcef14 into BerriAI:main Feb 16, 2026
11 of 18 checks passed
krrishdholakia added a commit that referenced this pull request Feb 16, 2026
* fix: SSO PKCE support fails in multi-pod Kubernetes deployments

* fix: virtual key grace period from env/UI

* fix: refactor, race condition handle, fstring sql injection

* fix: add async call to avoid server pauses

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: add await in tests

* add modify test to perform async run

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix grace period with better error handling on frontend and as per best practices

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: as per request changes

* Update litellm/proxy/utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Fix errors when callbacks are invoked for file delete operations:

* Fix errors when callbacks are invoked for file operations

* Fix: pass deployment credentials to afile_retrieve in managed_files post-call hook

* Fix: bypass managed files access check in batch polling by calling afile_content directly

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: afile_retrieve returns unified ID for batch output files

* fix: batch retrieve returns unified input_file_id

* fix(chatgpt): drop unsupported responses params for Codex

Co-authored-by: Cursor <cursoragent@cursor.com>

* test(chatgpt): ensure Codex request filters unsupported params

Co-authored-by: Cursor <cursoragent@cursor.com>

* Fix deleted managed files returning 403 instead of 404

* Add comments

* Update litellm/proxy/utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: thread deployment model_info through batch cost calculation

batch_cost_calculator only checked the global cost map, ignoring
deployment-level custom pricing (input_cost_per_token_batches etc.).
Add optional model_info param through the batch cost chain and pass
it from CheckBatchCost.
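The change described above amounts to an optional override parameter; a hedged sketch (function name, map contents, and rates invented for illustration):

```python
# Invented global cost map entry, mirroring the commit's description.
GLOBAL_COST_MAP = {"gpt-batch": {"input_cost_per_token_batches": 0.001}}

def batch_input_cost(model, tokens, model_info=None):
    # Prefer deployment-level model_info over the global cost map,
    # so custom per-deployment pricing is no longer ignored.
    info = model_info or GLOBAL_COST_MAP.get(model, {})
    rate = info.get("input_cost_per_token_batches", 0.0)
    return tokens * rate

print(batch_input_cost("gpt-batch", 1000))
print(batch_input_cost("gpt-batch", 1000,
                       model_info={"input_cost_per_token_batches": 0.0005}))
```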

* fix(deps): add pytest-postgresql for db schema migration tests

The test_db_schema_migration.py test requires pytest-postgresql but it was
missing from dependencies, causing import errors:

  ModuleNotFoundError: No module named 'pytest_postgresql'

Added pytest-postgresql ^6.0.0 to dev dependencies to fix test collection
errors in proxy_unit_tests.

This is a pre-existing issue, not related to PR #21277.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(test): replace caplog with custom handler for parallel execution

The cost calculation log level tests were failing when run with pytest-xdist
parallel execution because caplog doesn't work reliably across worker processes.
This causes "ValueError: I/O operation on closed file" errors.

Solution: Replace caplog fixture with a custom LogRecordHandler that directly
attaches to the logger. This approach works correctly in parallel execution
because each worker process has its own handler instance.

Fixes test failures in PR #21277 when running with --dist=loadscope.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
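A minimal sketch of that handler approach (the class name comes from the commit message; the attach/detach details and logger name are assumptions):

```python
import logging

class LogRecordHandler(logging.Handler):
    """Collects LogRecords directly, avoiding pytest's caplog fixture."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(record)

logger = logging.getLogger("cost_calc_demo")  # illustrative logger name
logger.setLevel(logging.DEBUG)
handler = LogRecordHandler()
logger.addHandler(handler)
try:
    logger.debug("cost calculated: %s", 0.42)
finally:
    logger.removeHandler(handler)  # always detach for test isolation

print([r.getMessage() for r in handler.records])
```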

* fix(test): correct async mock for video generation logging test

The test was failing with AuthenticationError because the mock wasn't
intercepting the actual HTTP handler calls. This caused real API calls
with no API key, resulting in 401 errors.

Root cause: The test was patching the wrong target using string path
'litellm.videos.main.base_llm_http_handler' instead of using patch.object
on the actual handler instance. Additionally, it was mocking the sync
method instead of async_video_generation_handler.

Solution: Use patch.object with side_effect pattern on the correct
async handler method, following the same pattern used in
test_video_generation_async().

Fixes test failure in PR #21277 when running with --dist=loadscope.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
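The patch.object-with-side_effect pattern that commit describes can be sketched like this (class and method names are simplified stand-ins, not litellm's actual handler):

```python
import asyncio
from unittest.mock import patch

class VideoHandler:
    # Stand-in for the real HTTP handler; the real method hits the network.
    async def async_video_generation_handler(self, prompt):
        raise RuntimeError("network call attempted")

handler_instance = VideoHandler()

async def fake_handler(prompt):
    return {"prompt": prompt, "status": "mocked"}

async def main():
    # patch.object on the concrete instance, targeting the async method,
    # instead of patching a string path the caller never resolves.
    with patch.object(handler_instance, "async_video_generation_handler",
                      side_effect=fake_handler):
        return await handler_instance.async_video_generation_handler("a cat")

result = asyncio.run(main())
print(result["status"])  # mocked
```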

* fix(test): add cleanup fixture and no_parallel mark for MCP tests

Two MCP server tests were failing when run with pytest-xdist parallel
execution (--dist=loadscope):
- test_mcp_routing_with_conflicting_alias_and_group_name
- test_oauth2_headers_passed_to_mcp_client

Both tests showed assertion failures where mocks weren't being called
(0 times instead of expected 1 time).

Root cause: These tests rely on global_mcp_server_manager singleton
state and complex async mocking that doesn't work reliably with
parallel execution. Each worker process can have different state
and patches may not apply correctly.

Solution:
1. Added autouse fixture to clean up global_mcp_server_manager registry
   before and after each test for better isolation
2. Added @pytest.mark.no_parallel to these specific tests to ensure
   they run sequentially, avoiding parallel execution issues

This approach maintains test reliability while allowing other tests
in the file to still benefit from parallelization.

Fixes test failures exposed by PR #21277.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Regenerate poetry.lock with Poetry 2.3.2

Updated lock file to use Poetry 2.3.2 (matching main branch standard).
This addresses Greptile feedback about Poetry version mismatch.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Remove unused pytest import and add trailing newline

- Removed unused pytest import (caplog fixture was removed)
- Added missing trailing newline at end of file

Addresses Greptile feedback (minor style issues).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Remove redundant import inside test method

The module litellm.videos.main is already imported at the top of
the file (line 21), so the import inside the test method is redundant.

Addresses Greptile feedback (minor style issue).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Fix converse anthropic usage object according to v1/messages specs

* Add routing based on if reasoning is supported or not

* add fireworks_ai/accounts/fireworks/models/kimi-k2p5 in model map

* Removed stray .md file

* fix(bedrock): clamp thinking.budget_tokens to minimum 1024

Bedrock rejects thinking.budget_tokens values below 1024 with a 400
error. This adds automatic clamping in the LiteLLM transformation
layer so callers (e.g. router with reasoning_effort="low") don't
need to know about the provider-specific minimum.

Fixes #21297

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
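The clamping itself is a one-liner; a hedged sketch (constant and function names are assumptions, only the 1024 minimum comes from the commit message):

```python
BEDROCK_MIN_THINKING_BUDGET = 1024  # provider minimum per the commit message

def clamp_thinking_budget(budget_tokens: int) -> int:
    # Raise sub-minimum values to 1024 instead of letting Bedrock return 400.
    return max(budget_tokens, BEDROCK_MIN_THINKING_BUDGET)

print(clamp_thinking_budget(256))   # 1024
print(clamp_thinking_budget(4096))  # 4096
```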

* fix: improve Langfuse test isolation to prevent flaky failures (#21093)

The test was creating fresh mocks but not fully isolating from setUp state,
causing intermittent CI failures with 'Expected generation to be called once.
Called 0 times.'

Instead of creating fresh mocks, properly reset the existing setUp mocks to
ensure clean state while maintaining proper mock chain configuration.

* feat(s3): add support for virtual-hosted-style URLs (#21094)

Add s3_use_virtual_hosted_style parameter to support AWS S3 virtual-hosted-style URL format (bucket.endpoint/key) alongside the existing path-style format (endpoint/bucket/key).

This enables compatibility with S3-compatible services like MinIO and aligns with AWS S3 official terminology.

* Addressed greptile comments to extract common helpers and return 404

* Allow effort="max" for Claude Opus 4.6 (#21112)

* fix(aiohttp): prevent closing shared ClientSession in AiohttpTransport (#21117)

When a shared ClientSession is passed to LiteLLMAiohttpTransport,
calling aclose() on the transport would close the shared session,
breaking other clients still using it.

Add owns_session parameter (default True for backwards compatibility)
to AiohttpTransport and LiteLLMAiohttpTransport. When a shared session
is provided in http_handler.py, owns_session=False is set to prevent
the transport from closing a session it does not own.

This aligns AiohttpTransport with the ownership pattern already used
in AiohttpHandler (aiohttp_handler.py).
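The ownership pattern that commit describes can be sketched without aiohttp (class names simplified; FakeSession is a stand-in for ClientSession):

```python
import asyncio

class FakeSession:
    # Minimal stand-in for aiohttp.ClientSession (illustrative only).
    def __init__(self):
        self.closed = False
    async def close(self):
        self.closed = True

class TransportSketch:
    # Ownership pattern from the commit message; names simplified.
    def __init__(self, session, owns_session=True):
        self.session = session
        self.owns_session = owns_session  # default True = old behavior

    async def aclose(self):
        if self.owns_session:  # never close a session we don't own
            await self.session.close()

async def demo():
    shared, owned = FakeSession(), FakeSession()
    await TransportSketch(shared, owns_session=False).aclose()
    await TransportSketch(owned).aclose()
    return shared.closed, owned.closed

shared_closed, owned_closed = asyncio.run(demo())
print(shared_closed, owned_closed)  # False True
```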

* perf(spend): avoid duplicate daily agent transaction computation (#21187)

* fix: proxy/batches_endpoints/endpoints.py:309:11: PLR0915 Too many statements (54 > 50)

* fix mypy

* Add doc for OpenAI Agents SDK with LiteLLM

* Add doc for OpenAI Agents SDK with LiteLLM

* Update docs/my-website/sidebars.js

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix mypy

* Update tests/test_litellm/proxy/_experimental/mcp_server/test_mcp_server.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Add blog for Managing Anthropic Beta Headers

* Add blog for Managing Anthropic Beta Headers

* correct the time

* Fix: Exclude tool params for models without function calling support (#21125) (#21244)

* Fix tool params reported as supported for models without function calling (#21125)

JSON-configured providers (e.g. PublicAI) inherited all OpenAI params
including tools, tool_choice, function_call, and functions — even for
models that don't support function calling. This caused an inconsistency
where get_supported_openai_params included "tools" but
supports_function_calling returned False.

The fix checks supports_function_calling in the dynamic config's
get_supported_openai_params and removes tool-related params when the
model doesn't support it. Follows the same pattern used by OVHCloud
and Fireworks AI providers.

* Style: move verbose_logger to module-level import, remove redundant try/except

Address review feedback from Greptile bot:
- Move verbose_logger import to top-level (matches project convention)
- Remove redundant try/except around supports_function_calling() since it
  already handles exceptions internally via _supports_factory()

* fix(index.md): cleanup str

* fix(proxy): handle missing DATABASE_URL in append_query_params (#21239)

* fix: handle missing database url in append_query_params

* Update litellm/proxy/proxy_cli.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix(mcp): revert StreamableHTTPSessionManager to stateless mode (#21323)

PR #19809 changed stateless=True to stateless=False to enable progress
notifications for MCP tool calls. This caused the mcp library to enforce
mcp-session-id headers on all non-initialize requests, breaking MCP
Inspector, curl, and any client without automatic session management.

Revert to stateless=True to restore compatibility with all MCP clients.
The progress notification code already handles missing sessions gracefully
(defensive checks + try/except), so no other changes are needed.

Fixes #20242

* UI - Content Filters, help edit/view categories and 1-click add categories + go to next page  (#21223)

* feat(ui/): allow viewing content filter categories on guardrail info

* fix(add_guardrail_form.tsx): add validation check to prevent adding empty content filter guardrails

* feat(ui/): improve ux around adding new content filter categories

easy to skip adding a category, so make it a 1-click thing

* Fix OCI Grok output pricing (#21329)

* fix(proxy): fix master key rotation Prisma validation errors

_rotate_master_key() used jsonify_object() which converts Python dicts
to JSON strings. Prisma's Python client rejects strings for Json-typed
fields — it requires prisma.Json() wrappers or native dicts.

This affected three code paths:
- Model table (create_many): litellm_params and model_info converted to
  strings, plus created_at/updated_at were None (non-nullable DateTime)
- Config table (update): param_value converted to string
- Credentials table (update): credential_values/credential_info
  converted to strings

Fix: replace jsonify_object() with model_dump(exclude_none=True) +
prisma.Json() wrappers for all Json fields. Wrap model delete+insert
in a Prisma transaction for atomicity. Add try/except around MCP
server rotation to prevent non-critical failures from blocking the
entire rotation.

---------

Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: Jay Prajapati <79649559+jayy-77@users.noreply.github.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: mjkam <mjkam@naver.com>
Co-authored-by: Fly <48186978+tuzkiyoung@users.noreply.github.com>
Co-authored-by: Kristoffer Arlind <13228507+KristofferArlind@users.noreply.github.com>
Co-authored-by: Constantine <Runixer@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Atharva Jaiswal <92455570+AtharvaJaiswal005@users.noreply.github.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Vincent Koc <vincentkoc@ieee.org>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
krrishdholakia added a commit that referenced this pull request Feb 17, 2026
* fix: SSO PKCE support fails in multi-pod Kubernetes deployments

* fix: virutal key grace period from env/UI

* fix: refactor, race condition handle, fstring sql injection

* fix: add async call to avoid server pauses

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: add await in tests

* add modify test to perform async run

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix grace period with better error handling on frontend and as per best practices

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: as per request changes

* Update litellm/proxy/utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Fix errors when callbacks are invoked for file delete operations:

* Fix errors when callbacks are invoked for file operations

* Fix: pass deployment credentials to afile_retrieve in managed_files post-call hook

* Fix: bypass managed files access check in batch polling by calling afile_content directly

* Update tests/test_litellm/proxy/management_endpoints/test_ui_sso.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: afile_retrieve returns unified ID for batch output files

* fix: batch retrieve returns unified input_file_id

* fix(chatgpt): drop unsupported responses params for Codex

Co-authored-by: Cursor <cursoragent@cursor.com>

* test(chatgpt): ensure Codex request filters unsupported params

Co-authored-by: Cursor <cursoragent@cursor.com>

* Fix deleted managed files returning 403 instead of 404

* Add comments

* Update litellm/proxy/utils.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix: thread deployment model_info through batch cost calculation

batch_cost_calculator only checked the global cost map, ignoring
deployment-level custom pricing (input_cost_per_token_batches etc.).
Add optional model_info param through the batch cost chain and pass
it from CheckBatchCost.

* fix(deps): add pytest-postgresql for db schema migration tests

The test_db_schema_migration.py test requires pytest-postgresql but it was
missing from dependencies, causing import errors:

  ModuleNotFoundError: No module named 'pytest_postgresql'

Added pytest-postgresql ^6.0.0 to dev dependencies to fix test collection
errors in proxy_unit_tests.

This is a pre-existing issue, not related to PR #21277.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(test): replace caplog with custom handler for parallel execution

The cost calculation log level tests were failing when run with pytest-xdist
parallel execution because caplog doesn't work reliably across worker processes.
This causes "ValueError: I/O operation on closed file" errors.

Solution: Replace caplog fixture with a custom LogRecordHandler that directly
attaches to the logger. This approach works correctly in parallel execution
because each worker process has its own handler instance.

Fixes test failures in PR #21277 when running with --dist=loadscope.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(test): correct async mock for video generation logging test

The test was failing with AuthenticationError because the mock wasn't
intercepting the actual HTTP handler calls. This caused real API calls
with no API key, resulting in 401 errors.

Root cause: The test was patching the wrong target using string path
'litellm.videos.main.base_llm_http_handler' instead of using patch.object
on the actual handler instance. Additionally, it was mocking the sync
method instead of async_video_generation_handler.

Solution: Use patch.object with side_effect pattern on the correct
async handler method, following the same pattern used in
test_video_generation_async().

Fixes test failure in PR #21277 when running with --dist=loadscope.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* fix(test): add cleanup fixture and no_parallel mark for MCP tests

Two MCP server tests were failing when run with pytest-xdist parallel
execution (--dist=loadscope):
- test_mcp_routing_with_conflicting_alias_and_group_name
- test_oauth2_headers_passed_to_mcp_client

Both tests showed assertion failures where mocks weren't being called
(0 times instead of expected 1 time).

Root cause: These tests rely on global_mcp_server_manager singleton
state and complex async mocking that doesn't work reliably with
parallel execution. Each worker process can have different state
and patches may not apply correctly.

Solution:
1. Added autouse fixture to clean up global_mcp_server_manager registry
   before and after each test for better isolation
2. Added @pytest.mark.no_parallel to these specific tests to ensure
   they run sequentially, avoiding parallel execution issues

This approach maintains test reliability while allowing other tests
in the file to still benefit from parallelization.

Fixes test failures exposed by PR #21277.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Regenerate poetry.lock with Poetry 2.3.2

Updated lock file to use Poetry 2.3.2 (matching main branch standard).
This addresses Greptile feedback about Poetry version mismatch.

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Remove unused pytest import and add trailing newline

- Removed unused pytest import (caplog fixture was removed)
- Added missing trailing newline at end of file

Addresses Greptile feedback (minor style issues).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Remove redundant import inside test method

The module litellm.videos.main is already imported at the top of
the file (line 21), so the import inside the test method is redundant.

Addresses Greptile feedback (minor style issue).

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>

* Fix converse anthropic usage object according to v1/messages specs

* Add routing based on if reasoning is supported or not

* add fireworks_ai/accounts/fireworks/models/kimi-k2p5 in model map

* Removed stray .md file

* fix(bedrock): clamp thinking.budget_tokens to minimum 1024

Bedrock rejects thinking.budget_tokens values below 1024 with a 400
error. This adds automatic clamping in the LiteLLM transformation
layer so callers (e.g. router with reasoning_effort="low") don't
need to know about the provider-specific minimum.

Fixes #21297

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: improve Langfuse test isolation to prevent flaky failures (#21093)

The test was creating fresh mocks but not fully isolating from setUp state,
causing intermittent CI failures with 'Expected generation to be called once.
Called 0 times.'

Instead of creating fresh mocks, properly reset the existing setUp mocks to
ensure clean state while maintaining proper mock chain configuration.

* feat(s3): add support for virtual-hosted-style URLs (#21094)

Add s3_use_virtual_hosted_style parameter to support AWS S3 virtual-hosted-style URL format (bucket.endpoint/key) alongside the existing path-style format (endpoint/bucket/key).

This enables compatibility with S3-compatible services like MinIO and aligns with AWS S3 official terminology.

* Addressed greptile comments to extract common helpers and return 404

* Allow effort="max" for Claude Opus 4.6 (#21112)

* fix(aiohttp): prevent closing shared ClientSession in AiohttpTransport (#21117)

When a shared ClientSession is passed to LiteLLMAiohttpTransport,
calling aclose() on the transport would close the shared session,
breaking other clients still using it.

Add owns_session parameter (default True for backwards compatibility)
to AiohttpTransport and LiteLLMAiohttpTransport. When a shared session
is provided in http_handler.py, owns_session=False is set to prevent
the transport from closing a session it does not own.

This aligns AiohttpTransport with the ownership pattern already used
in AiohttpHandler (aiohttp_handler.py).
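
The ownership pattern amounts to this (a simplified sketch with a stand-in class, not the actual LiteLLMAiohttpTransport code):

```python
# Sketch of the owns_session pattern: only close what you created.
import asyncio


class SessionTransport:
    def __init__(self, session, owns_session: bool = True):
        self.session = session
        self.owns_session = owns_session

    async def aclose(self):
        # A shared session passed in by the caller is left open so
        # other clients using it keep working.
        if self.owns_session:
            await self.session.close()
```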

* perf(spend): avoid duplicate daily agent transaction computation (#21187)

* fix: proxy/batches_endpoints/endpoints.py:309:11: PLR0915 Too many statements (54 > 50)

* fix mypy

* Add doc for OpenAI Agents SDK with LiteLLM

* Add doc for OpenAI Agents SDK with LiteLLM

* Update docs/my-website/sidebars.js

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* fix mypy

* Update tests/test_litellm/proxy/_experimental/mcp_server/test_mcp_server.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Add blog for Managing Anthropic Beta Headers

* Add blog for Managing Anthropic Beta Headers

* correct the time

* Fix: Exclude tool params for models without function calling support (#21125) (#21244)

* Fix tool params reported as supported for models without function calling (#21125)

JSON-configured providers (e.g. PublicAI) inherited all OpenAI params
including tools, tool_choice, function_call, and functions — even for
models that don't support function calling. This caused an inconsistency
where get_supported_openai_params included "tools" but
supports_function_calling returned False.

The fix checks supports_function_calling in the dynamic config's
get_supported_openai_params and removes tool-related params when the
model doesn't support it. Follows the same pattern used by OVHCloud
and Fireworks AI providers.
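
The core of the fix can be sketched as follows (simplified; the real override lives on the dynamically generated JSONProviderConfig class and calls litellm's supports_function_calling helper):

```python
# Sketch of the override: drop tool-related params for models that
# don't support function calling. Not the exact JSONProviderConfig code.
TOOL_PARAMS = {"tools", "tool_choice", "function_call", "functions",
               "parallel_tool_calls"}


def get_supported_openai_params(base_params: list,
                                model_supports_function_calling: bool) -> list:
    """Filter out tool params when function calling is unsupported."""
    if model_supports_function_calling:
        return list(base_params)
    return [p for p in base_params if p not in TOOL_PARAMS]
```

This keeps get_supported_openai_params consistent with supports_function_calling, so callers probing capability no longer see "tools" advertised for a model that will reject it.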

* Style: move verbose_logger to module-level import, remove redundant try/except

Address review feedback from Greptile bot:
- Move verbose_logger import to top-level (matches project convention)
- Remove redundant try/except around supports_function_calling() since it
  already handles exceptions internally via _supports_factory()

* fix(index.md): cleanup str

* fix(proxy): handle missing DATABASE_URL in append_query_params (#21239)

* fix: handle missing database url in append_query_params

* Update litellm/proxy/proxy_cli.py

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

---------

Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
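
The guard described above amounts to returning early when no database URL is configured (an illustrative sketch, not the exact proxy_cli.py code):

```python
# Sketch: tolerate a missing DATABASE_URL instead of crashing while
# appending query params. Function name mirrors the one in the commit.
from urllib.parse import urlencode


def append_query_params(url, params):
    if not url:
        # No database URL configured; nothing to append to.
        return url
    sep = "&" if "?" in url else "?"
    return url + sep + urlencode(params)
```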

* fix(migrations): Make vector stores migration idempotent with IF NOT EXISTS

- Add IF NOT EXISTS to ALTER TABLE ADD COLUMN statements
- Add IF NOT EXISTS to CREATE INDEX statements
- Prevents migration failures when columns/indexes already exist from manual fixes
- Follows PostgreSQL best practices for idempotent migrations
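
The generated statements look like the following (illustrative helpers with made-up table/column names; PostgreSQL supports IF NOT EXISTS on both forms):

```python
# Sketch: idempotent DDL so re-running the migration is a no-op, not an error.
def idempotent_add_column(table: str, column: str, col_type: str) -> str:
    return f'ALTER TABLE "{table}" ADD COLUMN IF NOT EXISTS "{column}" {col_type};'


def idempotent_create_index(index: str, table: str, column: str) -> str:
    return f'CREATE INDEX IF NOT EXISTS "{index}" ON "{table}" ("{column}");'
```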

---------

Co-authored-by: Harshit Jain <harshitjain0562@gmail.com>
Co-authored-by: Harshit Jain <48647625+Harshit28j@users.noreply.github.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Co-authored-by: Ephrim Stanley <ephrim.stanley@point72.com>
Co-authored-by: Jay Prajapati <79649559+jayy-77@users.noreply.github.com>
Co-authored-by: Cursor <cursoragent@cursor.com>
Co-authored-by: Julio Quinteros Pro <jquinter@gmail.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
Co-authored-by: Sameer Kankute <sameer@berri.ai>
Co-authored-by: mjkam <mjkam@naver.com>
Co-authored-by: Fly <48186978+tuzkiyoung@users.noreply.github.com>
Co-authored-by: Kristoffer Arlind <13228507+KristofferArlind@users.noreply.github.com>
Co-authored-by: Constantine <Runixer@gmail.com>
Co-authored-by: Emerson Gomes <emerson.gomes@thalesgroup.com>
Co-authored-by: Atharva Jaiswal <92455570+AtharvaJaiswal005@users.noreply.github.com>
Co-authored-by: Krrish Dholakia <krrishdholakia@gmail.com>
Co-authored-by: Vincent Koc <vincentkoc@ieee.org>

Development

Successfully merging this pull request may close these issues.

[Bug]: EuroLLM shows tool parameter supported even though function calling is not supported

3 participants