adding together ai models to litellm models json #20319
Sameerlite merged 2 commits into BerriAI:main from
Conversation
Greptile Summary
This PR adds two new Together AI model configurations to model_prices_and_context_window.json.
The PR description states this fixes function calling in kimi-k2.5 and glm-4.7 for the together_ai provider.
Key observations:
Missing requirements:
Confidence Score: 4/5
| Filename | Overview |
|---|---|
| model_prices_and_context_window.json | Added two Together AI model configurations: zai-org/GLM-4.7 and moonshotai/Kimi-K2.5 with function calling support |
Sequence Diagram
sequenceDiagram
participant User as User/Application
participant LiteLLM as LiteLLM Core
participant Config as model_prices_and_context_window.json
participant TogetherAI as Together AI Provider
participant API as Together AI API
User->>LiteLLM: completion(model="together_ai/moonshotai/Kimi-K2.5")
LiteLLM->>Config: Lookup model configuration
Config-->>LiteLLM: Return model config (supports_function_calling=true)
LiteLLM->>TogetherAI: Transform request with tool parameters
TogetherAI->>API: Forward chat completion with tools
API-->>TogetherAI: Response with function calls
TogetherAI-->>LiteLLM: Transform response to OpenAI format
LiteLLM-->>User: Standardized completion with function calls
Note over User,API: Same flow applies for GLM-4.7 model
User->>LiteLLM: completion(model="together_ai/zai-org/GLM-4.7")
LiteLLM->>Config: Lookup model configuration
Config-->>LiteLLM: Return config (supports_parallel_function_calling=true)
LiteLLM->>TogetherAI: Transform with parallel tool support
TogetherAI->>API: Forward request
API-->>TogetherAI: Response
TogetherAI-->>LiteLLM: Standardized response
LiteLLM-->>User: Completion result
"together_ai/moonshotai/Kimi-K2.5": {
    "input_cost_per_token": 5e-07,
    "litellm_provider": "together_ai",
    "max_input_tokens": 256000,
    "max_output_tokens": 256000,
    "max_tokens": 256000,
    "mode": "chat",
    "output_cost_per_token": 2.8e-06,
    "source": "https://www.together.ai/models/kimi-k2-5",
    "supports_function_calling": true,
    "supports_tool_choice": true,
    "supports_vision": true,
    "supports_reasoning": true
},
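To make the per-token prices above concrete: `input_cost_per_token` of 5e-07 is $0.50 per 1M input tokens and `output_cost_per_token` of 2.8e-06 is $2.80 per 1M output tokens. A minimal sketch of the arithmetic, where `compute_cost` is a hypothetical helper (not a LiteLLM API):

```python
# Cost-estimate sketch using the per-token prices from the entry above.
# compute_cost is a hypothetical helper, not a LiteLLM API.

INPUT_COST_PER_TOKEN = 5e-07     # $0.50 per 1M input tokens
OUTPUT_COST_PER_TOKEN = 2.8e-06  # $2.80 per 1M output tokens

def compute_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single request at the configured per-token rates."""
    return input_tokens * INPUT_COST_PER_TOKEN + output_tokens * OUTPUT_COST_PER_TOKEN

# e.g. a request with 100k prompt tokens and 10k completion tokens
print(round(compute_cost(100_000, 10_000), 4))  # 0.078
```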
Check if Kimi-K2.5 should include supports_parallel_function_calling
The similar Kimi-K2-Instruct-0905 model includes supports_parallel_function_calling (line 27152), and the newly added GLM-4.7 also has it. Verify if this model should have it too for consistency.
Not sure if this parameter is supported. I followed the other Kimi-K2.5 configuration from moonshotai.
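The reviewer's consistency check could be automated with a short scan over the config: flag Together AI chat models that declare function calling but omit `supports_parallel_function_calling`. The inline entries below are abbreviated examples for illustration, not the full JSON file.

```python
# Illustrative consistency check in the spirit of the review comment above.
# The config dict is an abbreviated sample, not the real
# model_prices_and_context_window.json contents.

config = {
    "together_ai/moonshotai/Kimi-K2.5": {
        "litellm_provider": "together_ai",
        "supports_function_calling": True,
    },
    "together_ai/zai-org/GLM-4.7": {
        "litellm_provider": "together_ai",
        "supports_function_calling": True,
        "supports_parallel_function_calling": True,
    },
}

# Models that support function calling but don't declare the parallel flag
missing = [
    name for name, entry in config.items()
    if entry.get("litellm_provider") == "together_ai"
    and entry.get("supports_function_calling")
    and "supports_parallel_function_calling" not in entry
]
print(missing)  # ['together_ai/moonshotai/Kimi-K2.5']
```

Whether the flag should actually be set still depends on what the Together AI deployment of the model supports, which is the open question in this thread.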
Relevant issues
N/A
Pre-Submission checklist
Please complete all items before asking a LiteLLM maintainer to review your PR
Adding testing in the tests/litellm/ directory (adding at least 1 test is a hard requirement - see details)
My PR passes all unit tests on make test-unit
CI (LiteLLM team)
Branch creation CI run
Link:
CI run for the last commit
Link:
Merge / cherry-pick CI run
Links:
Type
🐛 Bug Fix
Changes
This fixes function calling in kimi-k2.5 and glm-4.7 for the together_ai provider, especially in Claude Code.