73 changes: 69 additions & 4 deletions docs/my-website/docs/proxy/docker_quick_start.md
@@ -5,11 +5,76 @@ import TabItem from '@theme/TabItem';
# Getting Started Tutorial

End-to-End tutorial for LiteLLM Proxy to:
- Add an Azure OpenAI model
- Make a successful /chat/completion call
- Generate a virtual key
- Set RPM limit on virtual key

## Quick Install (Recommended for local / beginners)

New to LiteLLM? This is the easiest way to get started locally. One command installs LiteLLM and walks you through setup interactively — no config files to write by hand.

### 1. Install

```bash
curl -fsSL https://raw.githubusercontent.com/BerriAI/litellm/main/scripts/install.sh | sh
```
Comment on lines +20 to +21 (Contributor):

**P0: `install.sh` script does not exist in the repository**

The curl command references `https://raw.githubusercontent.com/BerriAI/litellm/main/scripts/install.sh`, but no `scripts/install.sh` file exists anywhere in this repository. Running the command as written will hit a 404, likely causing a silent failure or a confusing error message for users.

This documentation is tied to PR #23644, but that feature PR does not appear to have been merged yet: the `scripts/install.sh` file is absent and so is the `--setup` CLI flag (see the next comment). Publishing these docs before the feature lands will lead users through a broken getting-started flow.

This detects your OS, installs `litellm[proxy]`, and drops you straight into the setup wizard.

### 2. Follow the wizard

```
$ litellm --setup

Welcome to LiteLLM

Choose your LLM providers
○ 1. OpenAI GPT-4o, GPT-4o-mini, o1
○ 2. Anthropic Claude Opus, Sonnet, Haiku
○ 3. Azure OpenAI GPT-4o via Azure
○ 4. Google Gemini Gemini 2.0 Flash, 1.5 Pro
○ 5. AWS Bedrock Claude, Llama via AWS
○ 6. Ollama Local models

❯ Provider(s): 1,2

❯ OpenAI API key: sk-...
❯ Anthropic API key: sk-ant-...

❯ Port [4000]:
❯ Master key [auto-generate]:

✔ Config saved → ./litellm_config.yaml

❯ Start the proxy now? (Y/n):
```

Comment on lines +46 to +50 (Contributor):

**P2: Auto-generated master key not shown to the user**

The wizard TUI shows `❯ Master key [auto-generate]:` and the Step 3 curl uses `Bearer <your-master-key>` as a placeholder, but there is no guidance on where to find the auto-generated key after the wizard completes. Users who accepted the default will not know what value to substitute.

Consider adding a sentence after the wizard block, e.g.: "Your master key is printed at the end of the wizard and also stored in `./litellm_config.yaml` under `general_settings.master_key`."

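The P2 note above asks where the auto-generated key ends up. Assuming it is written to `./litellm_config.yaml` under `general_settings.master_key` (an assumption; the wizard output shown here never prints it), a quick way to read it back might look like this:

```shell
# Hypothetical: read back the wizard's auto-generated master key.
# Assumes the key lives in ./litellm_config.yaml under
# general_settings.master_key; that location is an assumption,
# not confirmed by the wizard output above.
CONFIG=litellm_config.yaml
if [ -f "$CONFIG" ]; then
  awk '/master_key:/ {print $2}' "$CONFIG"
fi
```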
Comment on lines +28 to +51 (Contributor):

**P0: `litellm --setup` flag is not implemented**

The `--setup` CLI flag shown in this wizard demo does not exist in the codebase. Searching `litellm/proxy/proxy_cli.py` and every other Python file in the repo turns up no `--setup` argument registered anywhere. Running `litellm --setup` will produce an "unrecognized arguments" error.

This confirms the documentation is being merged ahead of the actual feature from PR #23644. Users who follow this guide will hit an immediate failure at step 2.

The wizard walks you through:
1. Pick your LLM providers (OpenAI, Anthropic, Azure, Bedrock, Gemini, Ollama)
2. Enter API keys for each provider
3. Set a port and master key (or accept the defaults)
4. Config is saved to `./litellm_config.yaml` and the proxy starts immediately
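For reference, the saved `./litellm_config.yaml` would look roughly like the sketch below. This is based on LiteLLM's standard proxy config schema; the exact keys and model names the wizard emits are an assumption.

```yaml
# Sketch of the wizard's output, assuming LiteLLM's standard config schema.
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3-5-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY

general_settings:
  master_key: sk-...   # auto-generated key assumed to land here
```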

### 3. Make a call

Your proxy is running on `http://0.0.0.0:4000`. Test it:

```bash
curl -X POST 'http://0.0.0.0:4000/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <your-master-key>' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
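The same request can be built from Python with only the standard library. This is a sketch; the endpoint and `<your-master-key>` placeholder are taken from the curl example above.

```python
# Sketch: the same /chat/completions request built with the standard library.
# The proxy URL and <your-master-key> placeholder mirror the curl example;
# substitute the key the wizard generated for you.
import json
import urllib.request

payload = {
    "model": "gpt-4o",  # must match a model you enabled in the wizard
    "messages": [{"role": "user", "content": "Hello!"}],
}
req = urllib.request.Request(
    "http://0.0.0.0:4000/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <your-master-key>",
    },
    method="POST",
)
# With the proxy running, send it:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```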
Comment on lines +64 to +71 (Contributor):

**P2: Test call hardcodes `gpt-4o` regardless of provider selection**

The wizard demo in Step 2 shows the user selecting providers 1,2 (OpenAI + Anthropic), so the `gpt-4o` example works for that specific walkthrough. However, users who chose only providers like Anthropic, Bedrock, or Gemini will immediately hit a model-routing error when they try this call.

Consider adding a note that the model name must match a provider configured in the wizard, or annotate the example to make the dependency explicit (e.g. `"model": "gpt-4o"  # use any model you enabled in the wizard`).

:::tip Already have pip installed?
You can skip the curl install and run `litellm --setup` directly after `pip install 'litellm[proxy]'`.
:::

---

## Pre-Requisites
