128 changes: 0 additions & 128 deletions docs/my-website/blog/video_characters_litellm/index.md

This file was deleted.

75 changes: 0 additions & 75 deletions docs/my-website/docs/providers/openai/videos.md
@@ -135,81 +135,6 @@ curl --location --request POST 'http://localhost:4000/v1/videos/video_id/remix'
}'
```

### Character, Edit, and Extension Routes

The LiteLLM proxy supports the following OpenAI video routes:

- `POST /v1/videos/characters`
- `GET /v1/videos/characters/{character_id}`
- `POST /v1/videos/edits`
- `POST /v1/videos/extensions`

#### `target_model_names` support on character creation

`POST /v1/videos/characters` supports `target_model_names` for model-based routing (same behavior as video create).

```bash
curl --location 'http://localhost:4000/v1/videos/characters' \
--header 'Authorization: Bearer sk-1234' \
-F 'name=hero' \
-F 'target_model_names=gpt-4' \
-F 'video=@/path/to/character.mp4'
```

When `target_model_names` is used, LiteLLM returns an encoded character ID:

```json
{
"id": "character_...",
"object": "character",
"created_at": 1712697600,
"name": "hero"
}
```

Use the encoded ID directly when retrieving the character:

```bash
curl --location 'http://localhost:4000/v1/videos/characters/character_...' \
--header 'Authorization: Bearer sk-1234'
```

#### Encoded and non-encoded video IDs for edit/extension

Both routes accept either plain or encoded `video.id`:

- `POST /v1/videos/edits`
- `POST /v1/videos/extensions`

```bash
curl --location 'http://localhost:4000/v1/videos/edits' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
"prompt": "Make this brighter",
"video": { "id": "video_..." }
}'
```

```bash
curl --location 'http://localhost:4000/v1/videos/extensions' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
"prompt": "Continue this scene",
"seconds": "4",
"video": { "id": "video_..." }
}'
```

#### `custom_llm_provider` input sources

For these routes, `custom_llm_provider` may be supplied via:

- header: `custom-llm-provider`
- query: `?custom_llm_provider=...`
- body: `custom_llm_provider` (and `extra_body.custom_llm_provider` where supported)
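
The resolution logic for these three sources can be sketched as follows. This is a minimal, hypothetical illustration: the precedence order shown (header, then query, then body, then `extra_body`) is an assumption, not documented LiteLLM behavior, and `resolve_custom_llm_provider` is not a real LiteLLM function.

```python
from typing import Optional


def resolve_custom_llm_provider(
    headers: dict,
    query_params: dict,
    body: dict,
) -> Optional[str]:
    """Return the first custom_llm_provider found across the supported
    input sources, or None if no source supplies one.

    Precedence here (header > query > body > extra_body) is assumed
    for illustration only.
    """
    return (
        headers.get("custom-llm-provider")
        or query_params.get("custom_llm_provider")
        or body.get("custom_llm_provider")
        or (body.get("extra_body") or {}).get("custom_llm_provider")
    )
```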

Test OpenAI video generation request

```bash
76 changes: 0 additions & 76 deletions docs/my-website/docs/videos.md
@@ -290,82 +290,6 @@ curl --location 'http://localhost:4000/v1/videos' \
--header 'custom-llm-provider: azure'
```

### Character, Edit, and Extension Endpoints

The LiteLLM proxy also supports these OpenAI-compatible video routes:

- `POST /v1/videos/characters`
- `GET /v1/videos/characters/{character_id}`
- `POST /v1/videos/edits`
- `POST /v1/videos/extensions`

#### Routing Behavior (`target_model_names`, encoded IDs, and provider overrides)

- `POST /v1/videos/characters` supports `target_model_names`, with the same behavior as `POST /v1/videos`.
- When `target_model_names` is provided on character creation, LiteLLM encodes the returned `character_id` with routing metadata.
- `GET /v1/videos/characters/{character_id}` accepts encoded character IDs directly. LiteLLM decodes the ID internally and routes with the correct model/provider metadata.
- `POST /v1/videos/edits` and `POST /v1/videos/extensions` support both:
- plain `video.id`
- encoded `video.id` values returned by LiteLLM
- `custom_llm_provider` can be supplied using the same patterns as other proxy endpoints:
- header: `custom-llm-provider`
- query: `?custom_llm_provider=...`
- body: `custom_llm_provider` (or `extra_body.custom_llm_provider` where applicable)
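
The encoded-ID round trip described above can be sketched like this. LiteLLM's actual encoding scheme is not shown in this diff, so the base64-JSON payload and `character_` prefix layout here are assumptions for illustration only.

```python
import base64
import json


def encode_character_id(raw_id: str, target_model_names: str) -> str:
    """Pack the provider's raw character ID together with routing
    metadata into a single opaque ID (hypothetical format)."""
    payload = json.dumps({"id": raw_id, "target_model_names": target_model_names})
    token = base64.urlsafe_b64encode(payload.encode()).decode()
    return f"character_{token}"


def decode_character_id(encoded: str) -> dict:
    """Recover the raw ID and routing metadata from an encoded ID."""
    token = encoded.removeprefix("character_")
    return json.loads(base64.urlsafe_b64decode(token.encode()))
```

Under this sketch, the GET route would decode the ID it receives, route using `target_model_names`, and forward the inner `id` to the provider.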

#### Character create with `target_model_names`

```bash
curl --location 'http://localhost:4000/v1/videos/characters' \
--header 'Authorization: Bearer sk-1234' \
-F 'name=hero' \
-F 'target_model_names=gpt-4' \
-F 'video=@/path/to/character.mp4'
```

Example response (encoded `id`):

```json
{
"id": "character_...",
"object": "character",
"created_at": 1712697600,
"name": "hero"
}
```

#### Get character using encoded `character_id`

```bash
curl --location 'http://localhost:4000/v1/videos/characters/character_...' \
--header 'Authorization: Bearer sk-1234'
```

#### Video edit with encoded `video.id`

```bash
curl --location 'http://localhost:4000/v1/videos/edits' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
"prompt": "Make this brighter",
"video": { "id": "video_..." }
}'
```

#### Video extension with provider override from `extra_body`

```bash
curl --location 'http://localhost:4000/v1/videos/extensions' \
--header 'Authorization: Bearer sk-1234' \
--header 'Content-Type: application/json' \
--data '{
"prompt": "Continue this scene",
"seconds": "4",
"video": { "id": "video_..." },
"extra_body": { "custom_llm_provider": "openai" }
}'
```

Test Azure video generation request

```bash
18 changes: 1 addition & 17 deletions enterprise/litellm_enterprise/proxy/hooks/managed_files.py
@@ -26,7 +26,6 @@
get_batch_id_from_unified_batch_id,
get_content_type_from_file_object,
get_model_id_from_unified_batch_id,
get_models_from_unified_file_id,
normalize_mime_type_for_provider,
)
from litellm.types.llms.openai import (
@@ -905,21 +904,6 @@ async def async_post_call_success_hook(
) # managed batch id
model_id = cast(Optional[str], response._hidden_params.get("model_id"))
model_name = cast(Optional[str], response._hidden_params.get("model_name"))
resolved_model_name = model_name

# Some providers (e.g. Vertex batch retrieve) do not set model_name on
# the response. In that case, recover target_model_names from the input
# managed file metadata so unified output IDs preserve routing metadata.
if not resolved_model_name and isinstance(unified_file_id, str):
decoded_unified_file_id = (
_is_base64_encoded_unified_file_id(unified_file_id)
or unified_file_id
)
target_model_names = get_models_from_unified_file_id(
decoded_unified_file_id
)
if target_model_names:
resolved_model_name = ",".join(target_model_names)
original_response_id = response.id

if (unified_batch_id or unified_file_id) and model_id:
@@ -935,7 +919,7 @@
unified_file_id = self.get_unified_output_file_id(
output_file_id=original_file_id,
model_id=model_id,
model_name=resolved_model_name,
model_name=model_name,
)
setattr(response, file_attr, unified_file_id)

36 changes: 0 additions & 36 deletions litellm/anthropic_beta_headers_manager.py
@@ -367,42 +367,6 @@ def update_headers_with_filtered_beta(
return headers


def update_request_with_filtered_beta(
headers: dict,
request_data: dict,
provider: str,
) -> tuple[dict, dict]:
"""
Update both headers and request body beta fields based on provider support.
Modifies both dicts in place and returns them.

Args:
headers: Request headers dict (will be modified in place)
request_data: Request body dict (will be modified in place)
provider: Provider name

Returns:
Tuple of (updated headers, updated request_data)
"""
headers = update_headers_with_filtered_beta(headers=headers, provider=provider)

existing_body_betas = request_data.get("anthropic_beta")
if not existing_body_betas:
return headers, request_data

filtered_body_betas = filter_and_transform_beta_headers(
beta_headers=existing_body_betas,
provider=provider,
)

if filtered_body_betas:
request_data["anthropic_beta"] = filtered_body_betas
else:
request_data.pop("anthropic_beta", None)

return headers, request_data
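
The body-side filtering step in the removed helper can be exercised with a simplified, self-contained stand-in. The `SUPPORTED_BETAS` mapping and `filter_body_betas` below are invented for illustration; the real filtering lives in `filter_and_transform_beta_headers` elsewhere in the module, with a provider mapping this sketch does not reproduce.

```python
# Hypothetical per-provider support sets, standing in for the module's
# real beta-header mapping.
SUPPORTED_BETAS = {
    "anthropic": {"prompt-caching-2024-07-31", "computer-use-2025-01-24"},
}


def filter_body_betas(request_data: dict, provider: str) -> dict:
    """Drop unsupported anthropic_beta values from the request body,
    removing the key entirely if nothing survives (mirrors the
    removed helper's body-side behavior)."""
    existing = request_data.get("anthropic_beta")
    if not existing:
        return request_data
    filtered = [b for b in existing if b in SUPPORTED_BETAS.get(provider, set())]
    if filtered:
        request_data["anthropic_beta"] = filtered
    else:
        request_data.pop("anthropic_beta", None)
    return request_data
```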


def get_unsupported_headers(provider: str) -> List[str]:
"""
Get all beta headers that are unsupported by a provider (have null values in mapping).
8 changes: 4 additions & 4 deletions litellm/blog_posts.json
@@ -1,10 +1,10 @@
{
"posts": [
{
"title": "Realtime WebRTC HTTP Endpoints",
"description": "Use the LiteLLM proxy to route OpenAI-style WebRTC realtime via HTTP: client_secrets and SDP exchange.",
"date": "2026-03-12",
"url": "https://docs.litellm.ai/blog/realtime_webrtc_http_endpoints"
"title": "Incident Report: SERVER_ROOT_PATH regression broke UI routing",
"description": "How a single line removal caused UI 404s for all deployments using SERVER_ROOT_PATH, and the tests we added to prevent it from happening again.",
"date": "2026-02-21",
"url": "https://docs.litellm.ai/blog/server-root-path-incident"
}
]
}