docs: add Quick Install section for litellm --setup wizard #23905
Changes from all commits:

```
@@ -5,11 +5,76 @@ import TabItem from '@theme/TabItem';
```

# Getting Started Tutorial
End-to-End tutorial for LiteLLM Proxy to:

- Add an Azure OpenAI model
- Make a successful /chat/completion call
- Generate a virtual key
- Set RPM limit on virtual key
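The last two steps above (generate a virtual key, then cap it with an RPM limit) map onto the proxy's `/key/generate` endpoint. A minimal Python sketch using only the standard library; the helper name is ours, and the `rpm_limit` field is assumed from LiteLLM's key-management API:

```python
import json
import urllib.request


def build_key_generate_request(proxy_url: str, master_key: str, rpm: int) -> urllib.request.Request:
    """Build a POST /key/generate request that creates a virtual key
    limited to `rpm` requests per minute (field name assumed: rpm_limit)."""
    body = json.dumps({"rpm_limit": rpm}).encode("utf-8")
    return urllib.request.Request(
        f"{proxy_url}/key/generate",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {master_key}",
        },
        method="POST",
    )


# Example (actually sending it requires a running proxy):
req = build_key_generate_request("http://0.0.0.0:4000", "<your-master-key>", 100)
print(req.full_url)  # http://0.0.0.0:4000/key/generate
```

The response's `key` field is the virtual key a client would then pass as its `Authorization: Bearer` token.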
## Quick Install (Recommended for local / beginners)

New to LiteLLM? This is the easiest way to get started locally. One command installs LiteLLM and walks you through setup interactively; there are no config files to write by hand.
### 1. Install

```bash
curl -fsSL https://raw.githubusercontent.com/BerriAI/litellm/main/scripts/install.sh | sh
```

This detects your OS, installs `litellm[proxy]`, and drops you straight into the setup wizard.
### 2. Follow the wizard

```
$ litellm --setup

Welcome to LiteLLM

Choose your LLM providers
○ 1. OpenAI         GPT-4o, GPT-4o-mini, o1
○ 2. Anthropic      Claude Opus, Sonnet, Haiku
○ 3. Azure OpenAI   GPT-4o via Azure
○ 4. Google Gemini  Gemini 2.0 Flash, 1.5 Pro
○ 5. AWS Bedrock    Claude, Llama via AWS
○ 6. Ollama         Local models

❯ Provider(s): 1,2

❯ OpenAI API key: sk-...
❯ Anthropic API key: sk-ant-...

❯ Port [4000]:
❯ Master key [auto-generate]:

✔ Config saved → ./litellm_config.yaml

❯ Start the proxy now? (Y/n):
```
**Contributor comment on lines +46 to +50:**

The wizard TUI shows […]. Consider adding a sentence after the wizard block, e.g.: "Your master key is printed at the end of the wizard and also stored in […]"
**Contributor comment on lines +28 to +51:**

The `--setup` CLI flag does not exist yet. This confirms the documentation is being merged ahead of the actual feature from PR #23644. Users who follow this guide will hit an immediate failure at step 2.
The wizard walks you through:

1. Pick your LLM providers (OpenAI, Anthropic, Azure, Bedrock, Gemini, Ollama)
2. Enter API keys for each provider
3. Set a port and master key (or accept the defaults)
4. Config is saved to `./litellm_config.yaml` and the proxy starts immediately
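The docs do not show the generated file, but a wizard run like the demo above (providers 1 and 2, auto-generated master key) would plausibly produce a config along these lines. This is a sketch based on LiteLLM's documented `model_list` / `general_settings` config shape, not actual wizard output; the model names are illustrative:

```yaml
# Hypothetical litellm_config.yaml, sketched from LiteLLM's config format.
model_list:
  - model_name: gpt-4o
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-3-5-sonnet
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY

general_settings:
  master_key: sk-...   # auto-generated by the wizard
```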
### 3. Make a call

Your proxy is running on `http://0.0.0.0:4000`. Test it:

```bash
curl -X POST 'http://0.0.0.0:4000/chat/completions' \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <your-master-key>' \
  -d '{
    "model": "gpt-4o",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
```
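If you prefer Python over curl, the same request can be built with the standard library alone. A sketch: the helper name is ours, and `gpt-4o` assumes you selected OpenAI in the wizard:

```python
import json
import urllib.request

# Default wizard port; adjust if you changed it during setup.
PROXY_URL = "http://0.0.0.0:4000/chat/completions"


def build_chat_request(master_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build the same POST /chat/completions request shown in the curl example."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        PROXY_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {master_key}",
        },
        method="POST",
    )


# Sending it (requires the proxy to be running):
#   with urllib.request.urlopen(build_chat_request("<your-master-key>", "gpt-4o", "Hello!")) as r:
#       print(json.loads(r.read())["choices"][0]["message"]["content"])
```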
**Contributor comment on lines +64 to +71:**

The wizard demo in Step 2 shows the user selecting providers 1 and 2 (OpenAI and Anthropic). Consider adding a note that the model name must match a provider you configured in the wizard, or change the example to reflect the wizard's demo output more explicitly (e.g. […]).
:::tip Already have pip installed?
You can skip the curl install and run `litellm --setup` directly after `pip install 'litellm[proxy]'`.
:::

---

## Pre-Requisites
**Contributor comment:**

**`install.sh` script does not exist in the repository.** The curl command references `https://raw.githubusercontent.com/BerriAI/litellm/main/scripts/install.sh`, but no `scripts/install.sh` file exists anywhere in this repository. Running this command as written will produce a 404 error, likely causing a silent failure or a confusing error message for users.

This documentation is tied to PR #23644, but that feature PR does not appear to have been merged yet: the `scripts/install.sh` file is absent and so is the `--setup` CLI flag (see the comment above). Publishing these docs before the feature lands will lead users through a broken getting-started flow.