diff --git a/docs/my-website/release_notes/v1.81.6.md b/docs/my-website/release_notes/v1.81.6.md
index 777ffb960c0..ef19276f2cf 100644
--- a/docs/my-website/release_notes/v1.81.6.md
+++ b/docs/my-website/release_notes/v1.81.6.md
@@ -1,5 +1,5 @@
 ---
-title: "v1.81.6 - Enhanced Model Support, RAG API, and Performance Improvements"
+title: "v1.81.6 - Logs v2 with Tool Call Tracing"
 slug: "v1-81-6"
 date: 2026-01-31T00:00:00
 authors:
@@ -18,6 +18,7 @@ hide_table_of_contents: false
 import Tabs from '@theme/Tabs';
 import TabItem from '@theme/TabItem';
+import Image from '@theme/IdealImage';
@@ -41,46 +42,20 @@ pip install litellm==1.81.6
 ## Key Highlights
 
-Claude Agents SDK Integration - Native support for Claude Agent SDK on /messages endpoint with MCP tools integration.
-
-RAG API with S3 Vector Store - New /rag/ingest and /vector_store/search endpoints with S3 storage and PDF support.
-
-Logs View v2 - Redesigned logs interface with side panel, tool visualization, and error message search.
-
-5 New Models - Amazon Nova 2 Pro Preview, Gemini Robotics-ER 1.5 Preview, and 3 OpenRouter models added.
-
-Critical Performance Fixes - Resolved high CPU usage in Prometheus, optimized Presidio connections, and fixed cache stampede.
+Logs View v2 with Tool Call Tracing - Redesigned logs interface with side panel, structured tool visualization, and error message search for faster debugging.
 
 Let's dive in.
 
-### Claude Agents SDK Integration
-
-This release brings native support for Claude Agents SDK through LiteLLM AI Gateway, enabling AI agents that use Model Context Protocol (MCP) tools seamlessly.
-
-This means you can now onboard use cases like building autonomous agents that access GitHub, Jira, Linear, and custom MCP servers while maintaining authentication, rate limiting, and spend tracking.
-
-Developers can access Claude Agents SDK through LiteLLM's /messages endpoint to build and monitor agent operations with progress notifications.
-
-[Get Started](../../docs/mcp)
-
-### RAG API with S3 Vector Store
-
-This release introduces RAG (Retrieval-Augmented Generation) capabilities with S3 vector store integration, allowing you to build production-ready document search systems.
-
-As a LiteLLM Gateway Admin or Developer, you can now do the following:
-- Document Upload - Ingest PDFs, docs, and text files through the UI or /rag/ingest API
-- S3 Vector Storage - Store embeddings in S3 for cost-effective, scalable vector search
-- Permission Management - Control access to vector stores by team and user for multi-tenant applications
-
-To use it, simply upload your documents via the /rag/ingest endpoint, and LiteLLM will handle chunking, embedding generation, and vector storage automatically.
+### Logs View v2 with Tool Call Tracing
 
-[Get Started](../../docs/rag_ingest)
+This release introduces comprehensive tool call tracing through LiteLLM's redesigned Logs View v2, enabling developers to debug and monitor AI agent workflows in production environments seamlessly.
 
-### Logs View v2
+This means you can now onboard use cases like tracing complex multi-step agent interactions, debugging tool execution failures, and monitoring MCP server calls while maintaining full visibility into request/response payloads with syntax highlighting.
 
-This release introduces a redesigned logs interface for LiteLLM AI Gateway, allowing AI Gateway Admins to debug production issues faster.
+Developers can access the new Logs View through LiteLLM's UI to inspect tool calls in structured format, search logs by error messages or request patterns, and correlate agent activities across sessions with collapsible side panel views.
 
-This means you can now see tool calls in structured format, filter logs by error messages or request patterns, and view request/response payloads with syntax highlighting and collapsible sections.
+{/* TODO: Add image from Slack (group_7219.png) - save as logs_v2_tool_tracing.png */}
+{/* */}
 
 [Get Started](../../docs/proxy/ui_logs)