
fix: add missing OpenAI chat completion params to OPENAI_CHAT_COMPLETION_PARAMS #21360

Merged
28 commits merged into litellm_oss_staging_02_17_2026 from litellm_fix_add_missing_openai_params_20260217
Feb 17, 2026
Conversation


@ghost ghost commented Feb 17, 2026

What

Add 5 missing OpenAI chat completion parameters to OPENAI_CHAT_COMPLETION_PARAMS in litellm/constants.py:

  • store
  • prompt_cache_key
  • prompt_cache_retention
  • safety_identifier
  • verbosity

Why

These params are defined in the OpenAI Chat Completions API and are already present in DEFAULT_CHAT_COMPLETION_PARAM_VALUES, but were missing from OPENAI_CHAT_COMPLETION_PARAMS. As a result, they were silently dropped when allowed params were filtered for OpenAI-compatible providers.
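To illustrate the drop behavior, here is a minimal sketch (not the actual litellm code) of filtering params against an allow-list: any key absent from the list is silently removed before the request reaches the provider.

```python
# Minimal sketch of allow-list filtering -- NOT the real litellm implementation.
# ALLOWED_PARAMS and filter_allowed_params are illustrative stand-ins.
ALLOWED_PARAMS = ["model", "messages", "temperature"]  # imagine "store" is missing

def filter_allowed_params(params: dict) -> dict:
    """Keep only the keys that appear in the allowed list."""
    return {k: v for k, v in params.items() if k in ALLOWED_PARAMS}

request = {"model": "gpt-4o", "temperature": 0.2, "store": True}
print(filter_allowed_params(request))
# {'model': 'gpt-4o', 'temperature': 0.2}  -- "store" was silently dropped
```

Adding a param name to the allow-list is enough to make it pass through, which is exactly what this PR does for the 5 missing names.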

How

Added the 5 params to the OPENAI_CHAT_COMPLETION_PARAMS list.
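For reference, an illustrative excerpt of what the list looks like after the change (the real OPENAI_CHAT_COMPLETION_PARAMS in litellm/constants.py contains many more entries; only a few existing names are shown here):

```python
# Illustrative excerpt only -- the actual list in litellm/constants.py is
# much longer; the first three entries are placeholders for existing params.
OPENAI_CHAT_COMPLETION_PARAMS = [
    "model",
    "messages",
    "temperature",
    # newly added entries:
    "store",
    "prompt_cache_key",
    "prompt_cache_retention",
    "safety_identifier",
    "verbosity",
]
```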

Similar to #21195.

yuneng-jiang and others added 27 commits February 16, 2026 16:46
Co-authored-by: greptile-apps[bot] <165735046+greptile-apps[bot]@users.noreply.github.com>
- Fix HTTPException swallowed by broad except block in get_user_daily_activity
  and get_user_daily_activity_aggregated: re-raise HTTPException before the
  generic handler so 403 status codes propagate correctly
- Add status_code assertions in non-admin access tests

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Default user_id to caller's own ID for non-admins instead of 403 when
  omitted, preserving backward compatibility for API consumers
- Apply same fix to aggregated endpoint
- Update test to verify defaulting behavior instead of expecting 403
- Add useEffect to sync selectedUserId when auth state settles in
  UsagePageView to handle async auth initialization

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
[Feature] UI - Usage: Allow Filtering by User
…21349)

* feat: add GuardrailTracingDetail TypedDict and tracing fields to StandardLoggingGuardrailInformation

* feat: add policy_template field to Guardrail config TypedDict

* feat: accept GuardrailTracingDetail in base guardrail logging method

* feat: populate tracing fields in content filter guardrail

* test: add tracing fields tests for custom guardrail base class

* test: add tracing fields e2e tests for content filter guardrail

* feat: add guardrail tracing UI - policy badges, match details, timeline

* feat: redesign GuardrailViewer to Guardrails & Policy Compliance layout

Two-column layout with request lifecycle timeline on the left
and compact evaluation detail cards on the right. Header shows
guardrail count, pass/fail status, total overhead, policy info,
and an export button.

* feat: add clickable guardrail link in metrics + show policy names

* feat: add risk_score field to StandardLoggingGuardrailInformation

* feat: compute risk_score in content filter guardrail

* feat: display backend risk_score badge on evaluation cards

* fix: fallback to frontend risk score when backend doesn't provide one
…ude-opus-4.6-fast (#21316)

Add missing GitHub Copilot model entries for gpt-5.3-codex (GA) and
claude-opus-4.6-fast (Public Preview) to both the root and backup
model pricing JSON files.
[Infra] v1.81.13-nightly Change Copy to main
[Infra] Add Server Root Test to GitHub Actions
…ION_PARAMS

Add store, prompt_cache_key, prompt_cache_retention, safety_identifier, and verbosity
to OPENAI_CHAT_COMPLETION_PARAMS list.

These params were already in DEFAULT_CHAT_COMPLETION_PARAM_VALUES but missing from
the OPENAI_CHAT_COMPLETION_PARAMS list, causing them to be dropped when passed to
OpenAI-compatible providers.

vercel bot commented Feb 17, 2026

The latest updates on your projects. Learn more about Vercel for GitHub.

Project: litellm
Deployment: Ready
Actions: Preview, Comment
Updated (UTC): Feb 17, 2026 4:33am



CLAassistant commented Feb 17, 2026

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you all sign our Contributor License Agreement before we can accept your contribution.
5 out of 6 committers have signed the CLA.

✅ ishaan-jaff
✅ yuneng-jiang
✅ Chesars
✅ Sameerlite
✅ krrishdholakia
❌ shin-bot-litellm

@ghost ghost changed the base branch from main to litellm_oss_staging_02_17_2026 February 17, 2026 04:30
@ghost ghost merged commit b609f58 into litellm_oss_staging_02_17_2026 Feb 17, 2026
2 of 4 checks passed
@ghost ghost deleted the litellm_fix_add_missing_openai_params_20260217 branch February 17, 2026 04:31

greptile-apps bot commented Feb 17, 2026

Greptile Summary

Adds 5 missing OpenAI chat completion parameters (store, prompt_cache_key, prompt_cache_retention, safety_identifier, verbosity) to the OPENAI_CHAT_COMPLETION_PARAMS list in litellm/constants.py. These parameters were already defined in DEFAULT_CHAT_COMPLETION_PARAM_VALUES but were missing from OPENAI_CHAT_COMPLETION_PARAMS, causing them to be silently dropped when filtering allowed params for OpenAI-compatible providers (via get_standard_openai_params in litellm/utils.py, LiteLLMProxyChatConfig, HumanloopPromptManager, and the speech-to-completion bridge).

  • Adds store, prompt_cache_key, prompt_cache_retention, safety_identifier, and verbosity to OPENAI_CHAT_COMPLETION_PARAMS
  • No tests were added to verify the fix. The PR template requires at least 1 test in tests/litellm/.
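Since the summary flags the missing test, a hypothetical test along these lines could satisfy the template requirement. This is a sketch only: a real test would import the list via `from litellm.constants import OPENAI_CHAT_COMPLETION_PARAMS`; a stand-in list is defined here so the snippet is self-contained.

```python
# Hypothetical test sketch -- not part of the PR. The stand-in list below
# replaces the real import from litellm.constants so the snippet runs alone.
OPENAI_CHAT_COMPLETION_PARAMS = [
    "model", "messages", "temperature",
    "store", "prompt_cache_key", "prompt_cache_retention",
    "safety_identifier", "verbosity",
]

NEWLY_ADDED = (
    "store",
    "prompt_cache_key",
    "prompt_cache_retention",
    "safety_identifier",
    "verbosity",
)

def test_new_params_present() -> None:
    # Fails with AssertionError if any of the 5 params is missing again.
    for param in NEWLY_ADDED:
        assert param in OPENAI_CHAT_COMPLETION_PARAMS

test_new_params_present()
```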

Confidence Score: 4/5

  • This PR is safe to merge — it only appends entries to a constants list, aligning it with an already-defined sibling dict.
  • The change is minimal, additive, and aligns OPENAI_CHAT_COMPLETION_PARAMS with DEFAULT_CHAT_COMPLETION_PARAM_VALUES. All 5 params are well-established in the codebase. No behavioral regressions are expected. Deducted 1 point for the absence of a test verifying the fix.
  • No files require special attention.

Important Files Changed

litellm/constants.py: Adds the 5 missing OpenAI chat completion params (store, prompt_cache_key, prompt_cache_retention, safety_identifier, verbosity) to OPENAI_CHAT_COMPLETION_PARAMS. All five are already present in DEFAULT_CHAT_COMPLETION_PARAM_VALUES; no issues with the additions themselves.

Flowchart

flowchart TD
    A["Client sends chat completion request\nwith params like store, verbosity, etc."] --> B["litellm.completion()"]
    B --> C{"Filter params using\nOPENAI_CHAT_COMPLETION_PARAMS"}
    C -->|"Before fix:\nparams dropped"| D["Params missing from list\n→ silently removed"]
    C -->|"After fix:\nparams retained"| E["Params in list\n→ passed to provider"]
    E --> F["get_standard_openai_params()"]
    E --> G["LiteLLMProxyChatConfig\n.get_supported_openai_params()"]
    E --> H["HumanloopPromptManager"]
    E --> I["SpeechToCompletionBridge"]
    F --> J["OpenAI API / Compatible Provider"]
    G --> J
    H --> J
    I --> J

Last reviewed commit: 4ca2cef


@greptile-apps greptile-apps bot left a comment


1 file reviewed, no comments


This pull request was closed.