24 changes: 23 additions & 1 deletion docs/my-website/docs/image_generation.md
@@ -15,7 +15,7 @@ import TabItem from '@theme/TabItem';
| Fallbacks | ✅ | Works between supported models |
| Loadbalancing | ✅ | Works between supported models |
| Guardrails | ✅ | Applies to input prompts (non-streaming only) |
| Supported Providers | OpenAI, Azure, Google AI Studio, Vertex AI, AWS Bedrock, Recraft, Xinference, Nscale | |
| Supported Providers | OpenAI, Azure, Google AI Studio, Vertex AI, AWS Bedrock, Recraft, OpenRouter, Xinference, Nscale | |

## Quick Start

@@ -238,6 +238,27 @@ print(response)

See Recraft usage with LiteLLM [here](./providers/recraft.md#image-generation)

## OpenRouter Image Generation Models

Use this for image generation models available through OpenRouter (e.g., Google's Gemini image generation models).

#### Usage

```python showLineNumbers
from litellm import image_generation
import os

os.environ['OPENROUTER_API_KEY'] = "your-api-key"

response = image_generation(
model="openrouter/google/gemini-2.5-flash-image",
prompt="A beautiful sunset over a calm ocean",
size="1024x1024",
quality="high",
)
print(response)
```

## OpenAI Compatible Image Generation Models
Use this to call `/image_generation` endpoints on OpenAI-compatible servers, e.g. https://github.com/xorbitsai/inference

@@ -301,5 +322,6 @@ print(f"response: {response}")
| Vertex AI | [Vertex AI Image Generation →](./providers/vertex_image) |
| AWS Bedrock | [Bedrock Image Generation →](./providers/bedrock) |
| Recraft | [Recraft Image Generation →](./providers/recraft#image-generation) |
| OpenRouter | [OpenRouter Image Generation →](./providers/openrouter#image-generation) |
| Xinference | [Xinference Image Generation →](./providers/xinference#image-generation) |
| Nscale | [Nscale Image Generation →](./providers/nscale#image-generation) |
117 changes: 117 additions & 0 deletions docs/my-website/docs/providers/openrouter.md
@@ -93,3 +93,120 @@ response = embedding(
)
print(response)
```

## Image Generation

OpenRouter supports image generation through select models, such as Google's Gemini image generation models. LiteLLM transforms standard image generation requests into OpenRouter's chat completion format.

### Supported Parameters

- `size`: Maps to OpenRouter's `aspect_ratio` format
- `1024x1024` → `1:1` (square)
- `1536x1024` → `3:2` (landscape)
- `1024x1536` → `2:3` (portrait)
- `1792x1024` → `16:9` (wide landscape)
- `1024x1792` → `9:16` (tall portrait)

- `quality`: Maps to OpenRouter's `image_size` format (Gemini models)
- `low` or `standard` → `1K`
- `medium` → `2K`
- `high` or `hd` → `4K`

- `n`: Number of images to generate
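
The parameter mappings above can be sketched as a pair of lookup tables. This is an illustrative sketch only, based on the tables in this section; the helper name `to_openrouter_image_config` is hypothetical, and the actual transformation lives inside LiteLLM's OpenRouter image generation config:

```python
# Illustrative mapping of OpenAI-style size/quality values to OpenRouter's
# native aspect_ratio/image_size parameters, per the tables above.
# (Hypothetical helper -- the real logic is in LiteLLM's OpenRouter
# image generation transformation.)

SIZE_TO_ASPECT_RATIO = {
    "1024x1024": "1:1",   # square
    "1536x1024": "3:2",   # landscape
    "1024x1536": "2:3",   # portrait
    "1792x1024": "16:9",  # wide landscape
    "1024x1792": "9:16",  # tall portrait
}

QUALITY_TO_IMAGE_SIZE = {
    "low": "1K",
    "standard": "1K",
    "medium": "2K",
    "high": "4K",
    "hd": "4K",
}

def to_openrouter_image_config(size=None, quality=None):
    """Build an OpenRouter-style image_config dict from OpenAI-style params."""
    config = {}
    if size is not None:
        config["aspect_ratio"] = SIZE_TO_ASPECT_RATIO[size]
    if quality is not None:
        config["image_size"] = QUALITY_TO_IMAGE_SIZE[quality]
    return config

print(to_openrouter_image_config(size="1536x1024", quality="high"))
# → {'aspect_ratio': '3:2', 'image_size': '4K'}
```

The resulting dict matches the shape accepted by the `image_config` parameter shown later in this page.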

### Usage

```python
from litellm import image_generation
import os

os.environ["OPENROUTER_API_KEY"] = "your-api-key"

# Basic image generation
response = image_generation(
model="openrouter/google/gemini-2.5-flash-image",
prompt="A beautiful sunset over a calm ocean",
)
print(response)
```

### Advanced Usage with Parameters

```python
from litellm import image_generation
import os

os.environ["OPENROUTER_API_KEY"] = "your-api-key"

# Generate high-quality landscape image
response = image_generation(
model="openrouter/google/gemini-2.5-flash-image",
prompt="A serene mountain landscape with a lake",
size="1536x1024", # Landscape format
quality="high", # High quality (4K)
)

# Access the generated image
image_data = response.data[0]
if image_data.b64_json:
# Base64 encoded image
print(f"Generated base64 image: {image_data.b64_json[:50]}...")
elif image_data.url:
# Image URL
print(f"Generated image URL: {image_data.url}")
```

### Using OpenRouter-Specific Parameters

You can also pass OpenRouter-specific parameters directly using `image_config`:

```python
from litellm import image_generation
import os

os.environ["OPENROUTER_API_KEY"] = "your-api-key"

response = image_generation(
model="openrouter/google/gemini-2.5-flash-image",
prompt="A futuristic cityscape at night",
image_config={
"aspect_ratio": "16:9", # OpenRouter native format
"image_size": "4K" # OpenRouter native format
}
)
print(response)
```

### Response Format

The response follows the standard LiteLLM `ImageResponse` format:

```python
{
"created": 1703658209,
"data": [{
"b64_json": "iVBORw0KGgoAAAANSUhEUgAA...", # Base64 encoded image
"url": None,
"revised_prompt": None
}],
"usage": {
"input_tokens": 10,
"output_tokens": 1290,
"total_tokens": 1300
}
}
```

### Cost Tracking

OpenRouter provides cost information in the response, which LiteLLM automatically tracks:

```python
response = image_generation(
model="openrouter/google/gemini-2.5-flash-image",
prompt="A cute baby sea otter",
)

# Cost is available in the response metadata
print(f"Request cost: ${response._hidden_params['additional_headers']['llm_provider-x-litellm-response-cost']}")
```
1 change: 1 addition & 0 deletions litellm/images/main.py
@@ -404,6 +404,7 @@ def image_generation(  # noqa: PLR0915
litellm.LlmProviders.STABILITY,
litellm.LlmProviders.RUNWAYML,
litellm.LlmProviders.VERTEX_AI,
    litellm.LlmProviders.OPENROUTER,
):
if image_generation_config is None:
raise ValueError(
13 changes: 13 additions & 0 deletions litellm/llms/openrouter/image_generation/__init__.py
@@ -0,0 +1,13 @@
from litellm.llms.base_llm.image_generation.transformation import (
BaseImageGenerationConfig,
)

from .transformation import OpenRouterImageGenerationConfig

__all__ = [
"OpenRouterImageGenerationConfig",
]


def get_openrouter_image_generation_config(model: str) -> BaseImageGenerationConfig:
return OpenRouterImageGenerationConfig()