fix: support served_model_name for Baseten dedicated deployments #23382
ishaan-jaff merged 59 commits into BerriAI:litellm_ishaan_march_16
Conversation
Greptile Summary

This PR adds a `served_model_name` litellm_param that lets Baseten dedicated deployments override the model name sent in the request body while the deployment ID continues to drive URL routing. Key changes are summarized in the file table below.
One naming-convention concern: every other provider-specific field in `GenericLiteLLMParams` carries a provider prefix, but `served_model_name` does not. Confidence Score: 4/5
| Filename | Overview |
|---|---|
| litellm/llms/baseten/chat.py | Adds transform_request override that reads served_model_name from litellm_params and substitutes it for the deployment-ID model name before delegating to the parent. Logic is correct and isolated to this provider. |
| litellm/types/router.py | Adds served_model_name to GenericLiteLLMParams and LiteLLMParamsTypedDict. Functionally correct but the field name lacks a baseten_ prefix, inconsistent with every other provider-specific param in this file. |
| tests/test_litellm/llms/baseten/chat/test_baseten_completions.py | New unit tests cover the happy path (with and without served_model_name) and API-base routing. All tests are pure mocks/unit tests — no real network calls. |
| docs/my-website/docs/providers/baseten.md | Documentation updated with a new "Dedicated Deployment" proxy section explaining served_model_name usage. Existing Model API example is preserved. |
Sequence Diagram

```mermaid
sequenceDiagram
    participant User
    participant LiteLLMProxy
    participant BasetenConfig
    participant OpenAIGPTConfig
    participant BasetenAPI
    User->>LiteLLMProxy: chat.completions.create(model="baseten-model")
    LiteLLMProxy->>BasetenConfig: transform_request(model="wd1lndkw", litellm_params={served_model_name: "baseten-hosted/zai-org/GLM-5"})
    BasetenConfig->>BasetenConfig: served_model_name = litellm_params.get("served_model_name")
    BasetenConfig->>BasetenConfig: model = "baseten-hosted/zai-org/GLM-5"
    BasetenConfig->>OpenAIGPTConfig: super().transform_request(model="baseten-hosted/zai-org/GLM-5", ...)
    OpenAIGPTConfig-->>BasetenConfig: {model: "baseten-hosted/zai-org/GLM-5", messages: [...]}
    BasetenConfig->>BasetenAPI: POST https://model-wd1lndkw.api.baseten.co/.../v1 body={model: "baseten-hosted/zai-org/GLM-5"}
    BasetenAPI-->>User: chat completion response
```
Last reviewed commit: c461ab2
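To make the flow above concrete, here is a minimal, self-contained sketch of the override described in the file table and sequence diagram. The stub base class and the exact method signature are simplifications for illustration, not litellm's actual API surface.

```python
from typing import Any, Dict, List, Optional


class OpenAIGPTConfigStub:
    """Simplified stand-in for litellm's OpenAI-compatible base config."""

    def transform_request(
        self,
        model: str,
        messages: List[Dict[str, Any]],
        optional_params: dict,
        litellm_params: dict,
        headers: dict,
    ) -> dict:
        # Build an OpenAI-style request body from the (possibly overridden) model.
        return {"model": model, "messages": messages, **optional_params}


class BasetenConfigSketch(OpenAIGPTConfigStub):
    def transform_request(
        self,
        model: str,
        messages: List[Dict[str, Any]],
        optional_params: dict,
        litellm_params: dict,
        headers: dict,
    ) -> dict:
        # Substitute served_model_name (when configured) for the deployment-ID
        # model name in the request body; URL construction elsewhere still uses
        # the deployment ID (e.g. https://model-wd1lndkw.api.baseten.co/...).
        served_model_name: Optional[str] = litellm_params.get("served_model_name")
        if served_model_name is not None:
            model = served_model_name
        return super().transform_request(
            model, messages, optional_params, litellm_params, headers
        )


body = BasetenConfigSketch().transform_request(
    model="wd1lndkw",
    messages=[{"role": "user", "content": "hi"}],
    optional_params={},
    litellm_params={"served_model_name": "baseten-hosted/zai-org/GLM-5"},
    headers={},
)
assert body["model"] == "baseten-hosted/zai-org/GLM-5"
```

The key point is that only the request body changes; the deployment ID keeps driving URL construction.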
…llback The Pydantic default for user_role was INTERNAL_USER, but all runtime provisioning paths (SSO, SCIM, JWT) fall back to INTERNAL_USER_VIEW_ONLY when no settings are saved. This caused the UI to show "Internal User" on fresh instances while new users actually got "Internal Viewer".
Asserts that GET /get/internal_user_settings returns INTERNAL_USER_VIEW_ONLY on a fresh DB with no saved settings, matching the runtime fallback in SSO/SCIM/JWT provisioning.
…endpoints.py Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
…r-perms-not-synced-with-ui fix: align DefaultInternalUserParams Pydantic default with runtime fallback
Tests added for: UiLoadingSpinner, HashicorpVaultEmptyPlaceholder, PageVisibilitySettings, errorUtils, and mcpToolCrudClassification. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
[Test] UI Dashboard - Add unit tests for 5 untested files
…x_budget updates to admins Non-admin users (INTERNAL_USER) could call /key/block and /key/unblock on arbitrary keys, and modify max_budget on their own keys via /key/update. These endpoints are now restricted to proxy admins, team admins, or org admins. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
* fix(test): add missing mocks for test_streamable_http_mcp_handler_mock

  The test was missing mocks for extract_mcp_auth_context and set_auth_context, causing the handler to fail silently in the except block instead of reaching session_manager.handle_request. This mirrors the fix already applied to the sibling test_sse_mcp_handler_mock.

  Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>

* fix(ci): route OpenAI models through chat completions in pass-through tests

  The test_anthropic_messages_openai_model_streaming_cost_injection test fails because the OpenAI Responses API returns 400 for requests routed through the Anthropic Messages endpoint. Setting LITELLM_USE_CHAT_COMPLETIONS_URL_FOR_ANTHROPIC_MESSAGES=true routes OpenAI models through the stable chat completions path instead. Cost injection still works since it happens at the proxy level.

  Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>

* fix(ci): fix assemblyai custom auth and router wildcard test flakiness

  1. custom_auth_basic.py: Add user_role='proxy_admin' so the custom auth user can access management endpoints like /key/generate. The test test_assemblyai_transcribe_with_non_admin_key was hidden behind an earlier -x failure and was never reached before.
  2. test_router_utils.py: Add flaky(retries=3) and increase sleep from 1s to 2s for test_router_get_model_group_usage_wildcard_routes. The async callback needs time to write usage to cache, and 1s is insufficient on slower CI hardware.

  Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>

* ci: retrigger CI pipeline

  Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>

* fix(mypy): use LitellmUserRoles enum instead of raw string in custom_auth_basic

  Fixes mypy error: Argument 'user_role' has incompatible type 'str'; expected 'LitellmUserRoles | None'

  Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>

* fix: don't close HTTP/SDK clients on LLMClientCache eviction (BerriAI#22926)

  * fix: don't close HTTP/SDK clients on LLMClientCache eviction

    Remove the _remove_key override that eagerly called aclose()/close() on evicted clients. Evicted clients may still be held by in-flight streaming requests; closing them causes: RuntimeError: Cannot send a request, as the client has been closed. This is a regression from commit fb72979. Clients that are no longer referenced will be garbage-collected naturally. Explicit shutdown cleanup happens via close_litellm_async_clients(). Fixes production crashes after the 1-hour cache TTL expires.

  * test: update LLMClientCache unit tests for no-close-on-eviction behavior

    Flip the assertions: evicted clients must NOT be closed. Replace test_remove_key_closes_async_client → test_remove_key_does_not_close_async_client and equivalents for sync/eviction paths. Add test_remove_key_removes_plain_values for non-client cache entries. Remove test_background_tasks_cleaned_up_after_completion (no more _background_tasks). Remove test_remove_key_no_event_loop variant that depended on old behavior.
* test: add e2e tests for OpenAI SDK client surviving cache eviction

  Add two new e2e tests using real AsyncOpenAI clients:
  - test_evicted_openai_sdk_client_stays_usable: verifies size-based eviction doesn't close the client
  - test_ttl_expired_openai_sdk_client_stays_usable: verifies TTL expiry eviction doesn't close the client

  Both tests sleep after eviction so any create_task()-based close would have time to run, making the regression detectable. Also expand the module docstring to explain why the sleep is required.

* docs(AGENTS.md): add rule — never close HTTP/SDK clients on cache eviction

* docs(CLAUDE.md): add HTTP client cache safety guideline

* [Fix] Install bsdmainutils for column command in security scans

  The security_scans.sh script uses `column` to format vulnerability output, but the package wasn't installed in the CI environment.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix: handle string callback values in prometheus multiproc setup

  When callbacks are configured as a plain string (e.g., `callbacks: "my_callback"`) instead of a list, the proxy crashes on startup with: TypeError: can only concatenate str (not "list") to str. Normalize each callback setting to a list before concatenating.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* bump: version 1.82.2 → 1.82.3

* fix(test): update test_startup_fails_when_db_setup_fails for opt-in enforcement

  The --enforce_prisma_migration_check flag is now required to trigger sys.exit(1) on DB migration failure, after BerriAI#23675 flipped the default behavior to warn-and-continue.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

* fix(cost_calculator): use model name for per-request custom pricing when router_model_id has no pricing

  When custom pricing is passed as per-request kwargs (input_cost_per_token/output_cost_per_token), completion() registers pricing under the model name, but _select_model_name_for_cost_calc was selecting the router deployment hash (which has no pricing data), causing response_cost to be 0.0. Now checks whether the router_model_id entry actually has pricing before preferring it.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Cursor Agent <cursoragent@cursor.com>
Co-authored-by: Ishaan Jaff <ishaan-jaff@users.noreply.github.com>
Co-authored-by: Ishaan Jaff <ishaanjaffer0324@gmail.com>
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com>
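As context for the no-close-on-eviction commits above, a toy sketch of the behavior they describe follows. This is an illustrative cache, not litellm's actual LLMClientCache:

```python
import time
from typing import Any, Dict, Tuple


class NoCloseOnEvictionCache:
    """Toy TTL cache that drops evicted clients without closing them.

    In-flight streaming requests may still hold a reference to an evicted
    client; calling close()/aclose() here would later surface as
    'RuntimeError: Cannot send a request, as the client has been closed.'
    Unreferenced clients are reclaimed by garbage collection instead.
    """

    def __init__(self, ttl_seconds: float = 3600.0):
        self.ttl = ttl_seconds
        self._store: Dict[str, Tuple[Any, float]] = {}

    def set(self, key: str, client: Any) -> None:
        self._store[key] = (client, time.monotonic() + self.ttl)

    def get(self, key: str) -> Any:
        entry = self._store.get(key)
        if entry is None:
            return None
        client, expires_at = entry
        if time.monotonic() >= expires_at:
            # Drop the reference only; never close the client here.
            del self._store[key]
            return None
        return client
```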
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
…lege_escalation_fix [Fix] Privilege Escalation on /key/block, /key/unblock, and /key/update max_budget
Move the non-admin team validation into the existing get_team_object call site to avoid an extra DB round-trip. The existing call already fetches the team for limits checking — we now add the LIT-1884 guard there when team_obj is None for non-admin callers. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…er_invalid_keys [Fix] Prevent Internal Users from Creating Invalid Keys
…changed When updating or regenerating a key without changing its key_alias, the existing alias was being re-validated against current format rules. This caused keys with legacy aliases (created before stricter validation) to become uneditable. Now validation only runs when the alias actually changes. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…lidation_on_update [Fix] Key Alias Re-validation on Update Blocks Legacy Aliases
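A hedged sketch of the validate-only-on-change guard that the key-alias commit above describes; the helper and argument names here are illustrative, not litellm's actual identifiers:

```python
from typing import Optional


def _validate_alias_format(alias: str) -> None:
    # Stand-in for the real, stricter format validation.
    if not alias.strip():
        raise ValueError("key_alias must be non-empty")


def maybe_validate_key_alias(existing_alias: Optional[str], new_alias: Optional[str]) -> None:
    # Only re-validate when the alias actually changes, so keys created under
    # older, looser rules stay editable as long as the alias is untouched.
    if new_alias is not None and new_alias != existing_alias:
        _validate_alias_format(new_alias)
```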
The test expected fallback to all logs when backend filters return empty, but the source was intentionally changed to show empty results instead of stale data. Updated test to match. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add disable_custom_api_keys UI setting that prevents users from specifying custom key values during key generation and regeneration. When enabled, all keys must be auto-generated, eliminating the risk of key hash collisions in multi-tenant environments. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Without this field on the model, GET /get/ui_settings omits the setting from the response and field_schema, preventing the UI from reading it. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…23752)

* fix: Register DynamoAI guardrail initializer and enum entry

  Fix the "Unsupported guardrail: dynamoai" error by:
  1. Adding DYNAMOAI to SupportedGuardrailIntegrations enum
  2. Implementing initialize_guardrail() and registries in dynamoai/__init__.py

  The DynamoAI guardrail was added in PR BerriAI#15920 but never properly registered in the initialization system. The __init__.py was missing the guardrail_initializer_registry and guardrail_class_registry dictionaries that the dynamic discovery mechanism looks for at module load time.

  Fixes BerriAI#22773

  Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>

* Update litellm/proxy/guardrails/guardrail_hooks/dynamoai/__init__.py

  Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* Update litellm/proxy/guardrails/guardrail_hooks/dynamoai/__init__.py

  Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>

* test: Add tests for DynamoAI guardrail registration

  Verifies enum entry, initializer registry, class registry, instance creation, and global registry discovery.

  Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

---------

Co-authored-by: Claude Haiku 4.5 <noreply@anthropic.com>
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
Adds a toggle switch to the admin UI Settings page so administrators can enable/disable custom API key values without making direct API calls. Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
…_support…" (BerriAI#23817) This reverts commit 9661249.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Aggregated endpoint returns empty breakdown.entities; fall back to grouping breakdown.api_keys by team_id.
…api_keys [Feature] Disable Custom Virtual Key Values via UI Setting
fix(ui): CSV export empty on Global Usage page
…akage fix: langfuse trace leak key on model params
[Infra] Merge personal dev branch with daily dev branch
…ngfuse_key_leakage Revert "fix: langfuse trace leak key on model params"
…_16_2026 [Infra] Merge daily dev branch with main
Litellm ryan's daily branch march 16
Baseten dedicated deployments use an 8-char deployment ID for URL routing, but the vLLM server may expect a different model name in the request body (e.g. baseten-hosted/zai-org/GLM-5 vs wd1lndkw). Add served_model_name litellm_param to override the model field in the request body, and declare it in LiteLLMParamsTypedDict and GenericLiteLLMParams for IDE support. Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
```python
use_in_pass_through: Optional[bool] = False
use_litellm_proxy: Optional[bool] = False
## BASETEN ##
served_model_name: Optional[str] = None  # override model name in request body (e.g. Baseten dedicated deployments)
```
**Missing provider prefix on `served_model_name`**

All other provider-specific fields in `GenericLiteLLMParams` use a provider prefix (e.g. `vertex_project`, `aws_access_key_id`, `watsonx_region_name`). `served_model_name` carries a `## BASETEN ##` comment indicating it is Baseten-specific, yet it has no `baseten_` prefix.
This creates two risks:

- Another provider (e.g. any future vLLM-based integration) could independently define `served_model_name` with slightly different semantics, causing silent conflicts.
- Users configuring non-Baseten deployments may accidentally set `served_model_name` and observe unexpected request-body overrides, because `BasetenConfig.transform_request` checks for this name without verifying the provider.
Consider renaming to `baseten_served_model_name` (matching the existing convention) and updating the comment in `LiteLLMParamsTypedDict` (line 351) and `baseten/chat.py` (line 97) accordingly.
Merged 742b6be into BerriAI:litellm_ishaan_march_16
Summary

- Baseten dedicated deployments use an 8-char deployment ID for URL routing, but the vLLM server may expect a different model name in the request body (e.g. `baseten-hosted/zai-org/GLM-5` vs `1234abcd`)
- Previously there was no way to override the request-body model name — users would either get routing errors or 404s
- Adds a `served_model_name` litellm_param that overrides the `model` field in the request body while keeping the deployment ID for URL construction
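For illustration only, one way the new param could be wired up from Python via the Router (the YAML proxy equivalent lives in the updated Baseten docs); the model alias, deployment ID, and environment variable below are placeholders:

```python
import os

from litellm import Router

router = Router(
    model_list=[
        {
            "model_name": "glm-5",  # alias that clients request
            "litellm_params": {
                "model": "baseten/wd1lndkw",  # deployment ID drives URL routing
                "served_model_name": "baseten-hosted/zai-org/GLM-5",  # request-body override
                "api_key": os.environ.get("BASETEN_API_KEY"),
            },
        }
    ]
)
```

Routing continues to use the deployment ID in `litellm_params["model"]`, while the request body carries the `served_model_name` value.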
Test plan
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
- [ ] I have added testing in the `tests/test_litellm/` directory (Adding at least 1 test is a hard requirement - see details)
- [ ] My PR passes all unit tests on `make test-unit`
- [ ] My PR has been reviewed by @greptileai and received a Confidence Score of at least 4/5 before requesting a maintainer review

CI (LiteLLM team)
Branch creation CI run
Link:
CI run for the last commit
Link:
Merge / cherry-pick CI run
Links:
Type
🆕 New Feature
🐛 Bug Fix
🧹 Refactoring
📖 Documentation
🚄 Infrastructure
✅ Test
Changes