fix: add missing OpenAI chat completion params to OPENAI_CHAT_COMPLETION_PARAMS #21360
Add `store`, `prompt_cache_key`, `prompt_cache_retention`, `safety_identifier`, and `verbosity` to the `OPENAI_CHAT_COMPLETION_PARAMS` list. These params were already in `DEFAULT_CHAT_COMPLETION_PARAM_VALUES` but missing from the `OPENAI_CHAT_COMPLETION_PARAMS` list, causing them to be dropped when passed to OpenAI-compatible providers.
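The bug class here is drift between two constants that are supposed to cover the same params. A minimal sketch of how such drift could be detected (the constant names come from the PR; the sample values below are illustrative, not LiteLLM's actual contents):

```python
# Illustrative pre-fix state: the defaults dict knows about params
# that the whitelist list does not.
DEFAULT_CHAT_COMPLETION_PARAM_VALUES = {
    "temperature": None,
    "store": None,
    "prompt_cache_key": None,
    "prompt_cache_retention": None,
    "safety_identifier": None,
    "verbosity": None,
}
OPENAI_CHAT_COMPLETION_PARAMS = ["temperature"]  # missing the five params

# Params present in the defaults but absent from the whitelist -- the
# exact drift this commit fixes.
missing = sorted(
    set(DEFAULT_CHAT_COMPLETION_PARAM_VALUES) - set(OPENAI_CHAT_COMPLETION_PARAMS)
)
print(missing)
# ['prompt_cache_key', 'prompt_cache_retention', 'safety_identifier', 'store', 'verbosity']
```

A check like this could run in a unit test so the two constants cannot silently diverge again.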
Greptile Summary

Adds 5 missing OpenAI chat completion parameters (`store`, `prompt_cache_key`, `prompt_cache_retention`, `safety_identifier`, `verbosity`) to `OPENAI_CHAT_COMPLETION_PARAMS`.
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| `litellm/constants.py` | Adds 5 missing OpenAI chat completion params (`store`, `prompt_cache_key`, `prompt_cache_retention`, `safety_identifier`, `verbosity`) to `OPENAI_CHAT_COMPLETION_PARAMS`. All params are already present in `DEFAULT_CHAT_COMPLETION_PARAM_VALUES`. No issues with the additions themselves. |
Flowchart
```mermaid
flowchart TD
    A["Client sends chat completion request\nwith params like store, verbosity, etc."] --> B["litellm.completion()"]
    B --> C{"Filter params using\nOPENAI_CHAT_COMPLETION_PARAMS"}
    C -->|"Before fix:\nparams dropped"| D["Params missing from list\n→ silently removed"]
    C -->|"After fix:\nparams retained"| E["Params in list\n→ passed to provider"]
    E --> F["get_standard_openai_params()"]
    E --> G["LiteLLMProxyChatConfig\n.get_supported_openai_params()"]
    E --> H["HumanloopPromptManager"]
    E --> I["SpeechToCompletionBridge"]
    F --> J["OpenAI API / Compatible Provider"]
    G --> J
    H --> J
    I --> J
```
Last reviewed commit: 4ca2cef
What

Add 5 missing OpenAI chat completion parameters to `OPENAI_CHAT_COMPLETION_PARAMS` in `litellm/constants.py`:

- `store`
- `prompt_cache_key`
- `prompt_cache_retention`
- `safety_identifier`
- `verbosity`

Why

These params are defined in the OpenAI Chat Completions API and are already present in `DEFAULT_CHAT_COMPLETION_PARAM_VALUES`, but were missing from `OPENAI_CHAT_COMPLETION_PARAMS`. As a result, they get dropped when filtering allowed params for OpenAI-compatible providers.

How

Added the 5 params to the `OPENAI_CHAT_COMPLETION_PARAMS` list.

Similar to #21195.
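The "dropped when filtering" behavior described above can be sketched as follows. This is a simplified stand-in, not LiteLLM's actual filtering code; the whitelist contents and the `filter_allowed_params` helper are illustrative:

```python
# Simplified sketch of whitelist-based param filtering.
OPENAI_CHAT_COMPLETION_PARAMS = [
    "model", "messages", "temperature", "max_tokens",
    # Params added by this PR -- previously absent from the whitelist:
    "store", "prompt_cache_key", "prompt_cache_retention",
    "safety_identifier", "verbosity",
]

def filter_allowed_params(kwargs: dict) -> dict:
    """Keep only whitelisted params; anything else is silently dropped."""
    return {k: v for k, v in kwargs.items() if k in OPENAI_CHAT_COMPLETION_PARAMS}

request = {
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "hi"}],
    "store": True,
    "verbosity": "low",
}
filtered = filter_allowed_params(request)
# Before the fix, "store" and "verbosity" would be stripped here because
# they were not on the whitelist; with them listed, the full request
# passes through to the OpenAI-compatible provider unchanged.
```

Because the filter drops unknown keys silently rather than raising, a missing whitelist entry produces no error, which is why this kind of omission can go unnoticed until a provider ignores a param the caller set.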