fix(mypy): fix type: ignore placement for OTEL LogRecord import (#20351) #20477
Closed
Changes from all commits (3 commits):

- `424e1bb` fix(mypy): fix type: ignore placement for OTEL LogRecord import (#20351) (jquinter)
- `ba5275d` Revert "fix(mypy): fix type: ignore placement for OTEL LogRecord impo…" (krrishdholakia)
- `bf214c1` fix(otel): guard against None message in set_attributes tool_calls (#… (Harshit28j)
`.github/workflows/test-model-map.yaml` (new file, +15 lines):

```yaml
name: Validate model_prices_and_context_window.json

on:
  pull_request:
    branches: [ main ]

jobs:
  validate-model-prices-json:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Validate model_prices_and_context_window.json
        run: |
          jq empty model_prices_and_context_window.json
```
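The same check can be run locally before opening a PR, assuming `jq` is installed; the sample file path below is illustrative:

```shell
# Write a tiny valid JSON file and validate it exactly as the workflow does:
# `jq empty` parses the input, prints nothing, and exits non-zero on a syntax error.
printf '{"claude-sonnet-4": {"max_input_tokens": 200000}}' > /tmp/model_prices_sample.json

if jq empty /tmp/model_prices_sample.json; then
  echo "valid JSON"
else
  echo "invalid JSON"
fi
# prints: valid JSON
```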
Trivy ignore file (new file, +12 lines):

```text
# LiteLLM Trivy Ignore File
# CVEs listed here are temporarily allowlisted pending fixes

# Next.js vulnerabilities in UI dashboard (next@14.2.35)
# Allowlisted: 2026-01-31, 7-day fix timeline
# Fix: Upgrade to Next.js 15.5.10+ or 16.1.5+

# HIGH: DoS via request deserialization
GHSA-h25m-26qc-wcjf

# MEDIUM: Image Optimizer DoS
CVE-2025-59471
```
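For context, the format is one advisory ID per non-comment line. A minimal parser sketch of that convention (the helper below is illustrative, not LiteLLM code):

```python
def parse_trivy_ignore(text: str) -> list[str]:
    """Return the allowlisted advisory IDs, skipping comments and blank lines."""
    ids = []
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            ids.append(line)
    return ids

sample = """\
# HIGH: DoS via request deserialization
GHSA-h25m-26qc-wcjf

# MEDIUM: Image Optimizer DoS
CVE-2025-59471
"""

print(parse_trivy_ignore(sample))
# ['GHSA-h25m-26qc-wcjf', 'CVE-2025-59471']
```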
Example README (new file, +144 lines):

# Claude Agent SDK with LiteLLM Gateway

A simple example showing how to use Claude's Agent SDK with LiteLLM as a proxy. This lets you use any LLM provider (OpenAI, Bedrock, Azure, etc.) through the Agent SDK.

## Quick Start

### 1. Install dependencies

```bash
pip install anthropic claude-agent-sdk litellm
```

### 2. Start LiteLLM proxy

```bash
# Simple start with Claude
litellm --model claude-sonnet-4-20250514

# Or with a config file
litellm --config config.yaml
```
### 3. Run the chat

**Basic Agent (no MCP):**

```bash
python main.py
```

**Agent with MCP (DeepWiki2 for research):**

```bash
python agent_with_mcp.py
```

If the MCP connection fails, you can disable it:

```bash
USE_MCP=false python agent_with_mcp.py
```

That's it! You can now chat with the agent in your terminal.

### Chat Commands

While chatting, you can use these commands:
- `models` - List all available models (fetched from your LiteLLM proxy)
- `model` - Switch to a different model
- `clear` - Start a new conversation
- `quit` or `exit` - End the chat

The chat automatically fetches available models from your LiteLLM proxy's `/models` endpoint, so you'll always see what's currently configured.
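As a client-side sketch of that model listing, here is how names can be pulled out of an OpenAI-style `/models` response; the helper name and sample payload are assumptions for illustration, not code from this PR:

```python
import json

def extract_model_names(models_response: dict) -> list[str]:
    """Pull model names out of an OpenAI-style /models response."""
    return [entry["id"] for entry in models_response.get("data", [])]

# Sample payload in the OpenAI-compatible shape a /models endpoint returns
sample = json.loads("""
{
  "object": "list",
  "data": [
    {"id": "bedrock-claude-sonnet-4", "object": "model"},
    {"id": "bedrock-claude-sonnet-4.5", "object": "model"}
  ]
}
""")

print(extract_model_names(sample))
# ['bedrock-claude-sonnet-4', 'bedrock-claude-sonnet-4.5']
```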
## Configuration

Set these environment variables if needed:

```bash
export LITELLM_PROXY_URL="http://localhost:4000"
export LITELLM_API_KEY="sk-1234"
export LITELLM_MODEL="bedrock-claude-sonnet-4.5"
```

Or just use the defaults - it'll connect to `http://localhost:4000`.

## Files

- `main.py` - Basic interactive agent without MCP
- `agent_with_mcp.py` - Agent with MCP server integration (DeepWiki2)
- `common.py` - Shared utilities and functions
- `config.example.yaml` - Example LiteLLM configuration
- `requirements.txt` - Python dependencies

## Example Config File

If you want to use multiple models, create a `config.yaml` (see `config.example.yaml`):

```yaml
model_list:
  - model_name: bedrock-claude-sonnet-4
    litellm_params:
      model: "bedrock/us.anthropic.claude-sonnet-4-20250514-v1:0"
      aws_region_name: "us-east-1"

  - model_name: bedrock-claude-sonnet-4.5
    litellm_params:
      model: "bedrock/us.anthropic.claude-sonnet-4-5-20250929-v1:0"
      aws_region_name: "us-east-1"
```

Then start LiteLLM with: `litellm --config config.yaml`
## How It Works

The key is pointing the Agent SDK to LiteLLM instead of directly to Anthropic:

```python
# Point to LiteLLM gateway (not Anthropic)
os.environ["ANTHROPIC_BASE_URL"] = "http://localhost:4000"
os.environ["ANTHROPIC_API_KEY"] = "sk-1234"  # Your LiteLLM key

# Use any model configured in LiteLLM
options = ClaudeAgentOptions(
    model="bedrock-claude-sonnet-4",  # or gpt-4, or anything else
    system_prompt="You are a helpful assistant.",
    max_turns=50,
)
```

Note: Don't add `/anthropic` to the base URL - LiteLLM handles the routing automatically.
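The base-URL rule in that note can be sketched as a small helper; the function name and the use of `LITELLM_PROXY_URL` as the source variable are illustrative assumptions, not part of the example's code:

```python
import os

def resolve_base_url(default: str = "http://localhost:4000") -> str:
    """Return the proxy base URL, stripping any trailing /anthropic
    suffix since LiteLLM handles that routing itself."""
    url = os.environ.get("LITELLM_PROXY_URL", default).rstrip("/")
    if url.endswith("/anthropic"):
        url = url[: -len("/anthropic")]
    return url

print(resolve_base_url())  # the default URL when the variable is unset
```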
## Why Use This?

- **Switch providers easily**: Use the same code with OpenAI, Bedrock, Azure, etc.
- **Cost tracking**: LiteLLM tracks spending across all your agent conversations
- **Rate limiting**: Set budgets and limits on your agent usage
- **Load balancing**: Distribute requests across multiple API keys or regions
- **Fallbacks**: Automatically retry with a different model if one fails

## Troubleshooting

**Connection errors?**
- Make sure LiteLLM is running: `litellm --model your-model`
- Check that the URL is correct (default: `http://localhost:4000`)

**Authentication errors?**
- Verify your LiteLLM API key is correct
- Make sure the model is configured in your LiteLLM setup

**Model not found?**
- Check that the model name matches what's in your LiteLLM config
- Run `litellm --model your-model` to test that it works

**Agent with MCP stuck or failing?**
- The MCP server might not be available at `http://localhost:4000/mcp/deepwiki2`
- Try disabling MCP: `USE_MCP=false python agent_with_mcp.py`
- Or use the basic agent: `python main.py`

## Learn More

- [LiteLLM Docs](https://docs.litellm.ai/)
- [Claude Agent SDK](https://github.com/anthropics/anthropic-agent-sdk)
- [LiteLLM Proxy Guide](https://docs.litellm.ai/docs/proxy/quick_start)
Check warning (Code scanning / CodeQL): Workflow does not contain permissions (Medium)

Copilot Autofix (AI, about 1 month ago):
In general, the fix is to explicitly define a `permissions` block either at the workflow root (affecting all jobs) or within the specific job, and to grant only the scopes actually required. This workflow only checks out code and runs `jq` on a file, so it only needs read access to repository contents, i.e., `contents: read`.

The best minimal fix without changing functionality is to add a `permissions` block at the workflow root, just below the `name:` line (or above `jobs:`). This will apply to all jobs (currently just `validate-model-prices-json`) and restrict the `GITHUB_TOKEN` to read-only access to repository contents. No imports or other structural changes are needed; we only modify `.github/workflows/test-model-map.yaml` by inserting the block right after the `name:` line.
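The exact snippet was not rendered in this view, so the following is a sketch of what the described fix would look like:

```yaml
permissions:
  contents: read
```

Placed at the workflow root, this restricts the `GITHUB_TOKEN` to read-only access to repository contents for every job in the workflow.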