fix: add missing capability fields for gpt-5.4 variants#23645
Conversation
gpt-5.4-2026-03-05 and all azure/chatgpt gpt-5.4 variants were missing the supports_none_reasoning_effort and supports_xhigh_reasoning_effort fields, causing _supports_factory lookups to silently return False and breaking reasoning_effort and temperature handling for those models.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
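The failure mode can be sketched as follows. `_supports_factory` is litellm-internal; the `MODEL_COSTS` map, the `supports_capability` helper, and all values here are illustrative assumptions, not the project's actual code:

```python
# Simplified sketch of a capability lookup that treats a missing
# field as "unsupported" (illustrative, not litellm's real implementation).
MODEL_COSTS = {
    "gpt-5.4-2026-03-05": {
        "max_tokens": 128000,
        # capability fields absent -> every lookup falls back to False
    },
    "azure/gpt-5.4": {
        "max_tokens": 128000,
        "supports_none_reasoning_effort": True,
        "supports_xhigh_reasoning_effort": True,
    },
}

def supports_capability(model: str, field: str) -> bool:
    # A missing key silently reads as False, which is the bug this PR
    # fixes: the capability map itself must carry the flag explicitly.
    return MODEL_COSTS.get(model, {}).get(field, False)

print(supports_capability("azure/gpt-5.4", "supports_none_reasoning_effort"))       # True
print(supports_capability("gpt-5.4-2026-03-05", "supports_none_reasoning_effort"))  # False: field missing
```

Because the fallback is indistinguishable from a genuine "unsupported" entry, nothing errors out; the model just loses reasoning_effort and temperature handling until the fields are added.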
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: b16aeb34de
```json
"supports_none_reasoning_effort": false,
"supports_xhigh_reasoning_effort": true,
"supports_none_reasoning_effort": true,
"supports_xhigh_reasoning_effort": true,
```
Remove conflicting duplicate capability flags
This change adds supports_none_reasoning_effort and supports_xhigh_reasoning_effort twice (with contradictory values) inside the global.anthropic.claude-sonnet-4-5-20250929-v1:0 object, even though the commit is scoped to GPT-5.4 variants. Duplicate JSON keys are parser-dependent (first wins, last wins, or rejection), so this can make Claude capability detection inconsistent across environments and accidentally alter reasoning-effort behavior for this Bedrock model.
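Because the parser behavior is environment-dependent, a strict load can surface duplicate keys early instead of letting one parser pick a winner. A minimal check using the standard library's `object_pairs_hook` (the helper name `no_duplicates` is ours):

```python
import json

def no_duplicates(pairs):
    # json.loads calls this hook with every (key, value) pair it parses
    # for each object, so repeated keys can be rejected rather than
    # silently collapsed.
    seen = {}
    for key, value in pairs:
        if key in seen:
            raise ValueError(f"duplicate key: {key!r}")
        seen[key] = value
    return seen

doc = '{"supports_none_reasoning_effort": false, "supports_none_reasoning_effort": true}'
print(json.loads(doc))  # plain load: last value wins -> {'supports_none_reasoning_effort': True}
try:
    json.loads(doc, object_pairs_hook=no_duplicates)
except ValueError as e:
    print(e)  # duplicate key: 'supports_none_reasoning_effort'
```

Running a check like this in CI over the pricing JSON would have caught the conflicting pair before merge.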
Greptile Summary

This PR adds supports_none_reasoning_effort and supports_xhigh_reasoning_effort to the gpt-5.4 model entries in the pricing/capability JSON files.

Changes:
Confidence Score: 2/5
| Filename | Overview |
|---|---|
| model_prices_and_context_window.json | Adds supports_none_reasoning_effort and supports_xhigh_reasoning_effort to 6 model entries correctly, but introduces duplicate JSON keys in the global.anthropic.claude-sonnet-4-5-20250929-v1:0 entry (a Claude model unrelated to gpt-5.4), which is invalid JSON per RFC 8259. |
| litellm/model_prices_and_context_window_backup.json | Identical change set as the primary JSON file — same correct gpt-5.4 additions and same duplicate-key bug in the global.anthropic.claude-sonnet-4-5-20250929-v1:0 Bedrock Claude entry. |
Flowchart

```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[gpt-5.4 model request] --> B{supports_none_reasoning_effort?}
    B -->|lookup via get_model_info| C[model_prices_and_context_window.json]
    C --> D{Model entry}
    D -->|azure/gpt-5.4| E[true]
    D -->|azure/gpt-5.4-2026-03-05| F[true]
    D -->|azure/gpt-5.4-pro| G[false]
    D -->|azure/gpt-5.4-pro-2026-03-05| H[false]
    D -->|gpt-5.4-2026-03-05| I[true]
    D -->|global.anthropic.claude-sonnet-4-5| J["❌ DUPLICATE KEY\nfalse → true (last wins)"]
    E & F & I --> K[reasoning_effort + temperature handling works]
    G & H --> L[xhigh supported, none blocked]
    J --> M[JSON spec violation - may break strict parsers]
```
Last reviewed commit: b16aeb3
```json
"supports_none_reasoning_effort": false,
"supports_xhigh_reasoning_effort": true,
"supports_none_reasoning_effort": true,
"supports_xhigh_reasoning_effort": true,
```
Duplicate JSON keys for wrong model entry
The global.anthropic.claude-sonnet-4-5-20250929-v1:0 model entry now contains two conflicting entries for both supports_none_reasoning_effort and supports_xhigh_reasoning_effort:
```json
"supports_none_reasoning_effort": false,
"supports_xhigh_reasoning_effort": true,
"supports_none_reasoning_effort": true,
"supports_xhigh_reasoning_effort": true,
```

Duplicate keys are not valid per the JSON specification (RFC 8259). While Python's json.loads() silently uses the last-defined value (true), strict JSON parsers will reject this, and linters/validators will flag it. The intended final state appears to be true for both, so the first duplicate pair should be removed:
```diff
- "supports_none_reasoning_effort": false,
- "supports_xhigh_reasoning_effort": true,
- "supports_none_reasoning_effort": true,
- "supports_xhigh_reasoning_effort": true,
+ "supports_none_reasoning_effort": true,
+ "supports_xhigh_reasoning_effort": true,
```
Additionally, this model (global.anthropic.claude-sonnet-4-5-20250929-v1:0) is a Claude Sonnet 4.5 Bedrock entry, not a gpt-5.4 variant — it appears to have been modified unintentionally by this PR. The same duplicate-key issue exists at the same line range in litellm/model_prices_and_context_window_backup.json.
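The last-wins behavior described above can be confirmed directly with CPython's json module, using the exact duplicated pair from the diff (a minimal check, not part of the PR):

```python
import json

# The duplicated pair as it appears in the affected model entry.
snippet = """{
    "supports_none_reasoning_effort": false,
    "supports_xhigh_reasoning_effort": true,
    "supports_none_reasoning_effort": true,
    "supports_xhigh_reasoning_effort": true
}"""

# CPython's json module keeps the last occurrence of each key, so both
# flags come back as True even though false appears first. A parser with
# first-wins semantics would instead report False for the first flag.
parsed = json.loads(snippet)
print(parsed)
```

This is exactly why the behavior is environment-dependent: nothing in the file pins which of the two values a given consumer will see.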
```json
"supports_none_reasoning_effort": false,
"supports_xhigh_reasoning_effort": true,
"supports_none_reasoning_effort": true,
"supports_xhigh_reasoning_effort": true,
```
Same duplicate-key bug in backup file
This file has the identical duplicate-key problem for global.anthropic.claude-sonnet-4-5-20250929-v1:0:
```json
"supports_none_reasoning_effort": false,
"supports_xhigh_reasoning_effort": true,
"supports_none_reasoning_effort": true,
"supports_xhigh_reasoning_effort": true,
```

The fix here is the same: remove the first (false) pair and retain only the intended values:
```diff
- "supports_none_reasoning_effort": false,
- "supports_xhigh_reasoning_effort": true,
- "supports_none_reasoning_effort": true,
- "supports_xhigh_reasoning_effort": true,
+ "supports_none_reasoning_effort": true,
+ "supports_xhigh_reasoning_effort": true,
```
Summary

- Adds the supports_none_reasoning_effort and supports_xhigh_reasoning_effort fields to gpt-5.4-2026-03-05 and all Azure/ChatGPT gpt-5.4 variants
- Without these fields, _supports_factory lookups return False, breaking reasoning_effort and temperature handling for users pinning to these model snapshots/providers

Relevant issues
Follow-up fix for #22953
Pre-Submission checklist

- [ ] I have added a test in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
- [ ] My PR passes all unit tests on make test-unit

Type
🐛 Bug Fix
Changes
Added supports_none_reasoning_effort and supports_xhigh_reasoning_effort to 7 model entries in both model_prices_and_context_window.json and its backup copy.

🤖 Generated with Claude Code