feat(opencode): add LiteLLM provider with auto model discovery #14468
balcsida wants to merge 19 commits into anomalyco:dev
Conversation
The following comment was made by an LLM, it may be inaccurate: Based on my search, I found two related PRs that are NOT duplicates but are closely related:

These PRs appear to be related work in the same area but are separate features. PR #14468 (the current PR) is a distinct, more comprehensive implementation of the LiteLLM provider with auto-discovery, and it references these earlier related PRs rather than duplicating them. No duplicate PRs found.
lgtm, but test fails. Check e2e, this test fails: Did you try testing locally? `bun test:e2e:local`

@alexyaroshuk, yes, sorry, fixed all the tests
bf8b1e8 to
7dc5277
Compare
👍

@alexyaroshuk, may I have your blessing on this? 🥺

@adamdotdevin can this one get a look?
@alexyaroshuk @adamdotdevin any luck getting this merged soon?
@balcsida please rebase, there are some conflicts now. @alexyaroshuk @adamdotdevin could you review please? This will make so many opencode users in the enterprise world very happy. Thanks!
Hey @kkugot, thanks for the ping. I rebased the branch and fixed the conflicts, but dev currently fails, so tests for this PR will fail as well.

While I understand how this feature is a very nice-to-have in an enterprise environment (it is exactly the same reason why I sent this PR - we would like to use this as well), please kindly don't spam alexyaroshuk and adamdotdevin. I would rather not put additional pressure on them.

Kind regards,
Issue for this PR
Closes #13891
Type of change
What does this PR do?
Adds a native `litellm` provider that auto-discovers models from a LiteLLM proxy at startup. Previously users had to manually define every model in `opencode.json` — now setting `LITELLM_API_KEY` and `LITELLM_HOST` is enough.

Discovery: Fetches `/model/info` for rich metadata (pricing, limits, capabilities). Falls back to the standard `/models` endpoint for older LiteLLM versions or non-LiteLLM OpenAI-compatible proxies.
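A minimal sketch of that fallback chain, assuming LiteLLM's documented `/model/info` response shape (`model_name`, `model_info.input_cost_per_token`, `model_info.max_tokens`). Function and type names here are illustrative, not the PR's actual `litellm.ts`:

```typescript
// Sketch of the /model/info -> /models fallback chain (names are hypothetical).
interface DiscoveredModel {
  id: string
  inputCostPerToken?: number // only available via /model/info
  maxTokens?: number
}

// Rich path: LiteLLM's /model/info returns per-model metadata.
function parseModelInfo(body: { data: any[] }): DiscoveredModel[] {
  return body.data.map((m) => ({
    id: m.model_name,
    inputCostPerToken: m.model_info?.input_cost_per_token,
    maxTokens: m.model_info?.max_tokens,
  }))
}

// Fallback path: a plain OpenAI-compatible /models list only carries ids.
function parseModels(body: { data: { id: string }[] }): DiscoveredModel[] {
  return body.data.map((m) => ({ id: m.id }))
}

async function discoverModels(host: string, apiKey: string): Promise<DiscoveredModel[]> {
  const headers = { Authorization: `Bearer ${apiKey}` }
  const rich = await fetch(`${host}/model/info`, { headers })
  if (rich.ok) return parseModelInfo(await rich.json())
  // Older LiteLLM versions or non-LiteLLM OpenAI-compatible proxies.
  const plain = await fetch(`${host}/models`, { headers })
  if (!plain.ok) throw new Error(`model discovery failed: HTTP ${plain.status}`)
  return parseModels(await plain.json())
}
```

Models found only via the fallback endpoint carry no pricing or limit metadata, which is why the PR applies sensible defaults in that case.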
What gets mapped from `/model/info`: `supported_openai_params`

Reasoning transforms: Claude models behind LiteLLM get `thinking` budget variants, other models get `reasoningEffort`. This prevents false-positive reasoning param injection for aliased models (e.g. `o3-custom` → Mistral).

Files changed:

- `litellm.ts` (new): Discovery module — fetches, parses, maps model metadata
- `provider.ts`: Seeds `litellm` provider from env vars; custom loader calls discovery and injects models (user config takes precedence)
- `transform.ts`: LiteLLM-specific reasoning variant logic
- `llm.ts`: Adds explicit `providerID === "litellm"` to proxy detection

Environment variables:

| Variable | Default |
| --- | --- |
| `LITELLM_API_KEY` | (none) |
| `LITELLM_HOST` | `http://localhost:4000` |
| `LITELLM_BASE_URL` | value of `LITELLM_HOST` |
| `LITELLM_CUSTOM_HEADERS` | `{}` |
| `LITELLM_TIMEOUT` | `5000` |

Builds on ideas from #13896 (fallback endpoint, over-200k pricing, temperature detection) and #14277 (clean fallback chain). Supersedes my earlier #14202, which was auto-closed for not following the PR template.
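The aliasing-aware reasoning rule above can be sketched as follows. This is an illustrative model, not the PR's actual `transform.ts`: the names (`reasoningVariant`, `supportsReasoning`) and the 16k budget are assumptions; the key point is that the decision keys off the underlying model reported by the proxy, never the alias name.

```typescript
// Hypothetical sketch of reasoning-variant selection for LiteLLM models.
type ReasoningVariant =
  | { kind: "thinking"; budgetTokens: number } // Claude-style thinking budget
  | { kind: "effort"; effort: "low" | "medium" | "high" } // reasoningEffort-style
  | { kind: "none" }

function reasoningVariant(
  alias: string,
  underlyingModel: string | undefined, // what the proxy says the alias resolves to
  supportsReasoning: boolean, // from the proxy's model metadata
): ReasoningVariant {
  // An alias like "o3-custom" may point at a non-reasoning model (e.g. Mistral);
  // trust the proxy's metadata, not the alias name.
  if (!supportsReasoning) return { kind: "none" }
  const base = underlyingModel ?? alias
  if (base.includes("claude")) return { kind: "thinking", budgetTokens: 16000 }
  return { kind: "effort", effort: "medium" }
}
```

Under this sketch, `o3-custom` aliased to a Mistral model gets no reasoning params even though its name looks like an OpenAI reasoning model.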
How did you verify your code works?

- Set `LITELLM_API_KEY` + `LITELLM_HOST` pointing to a running LiteLLM proxy
- Ran `opencode` — LiteLLM models appear in model picker
- Aliased model (`o3-custom` → Mistral) — reasoning params NOT injected
- Claude models — `thinking` variants work
- Proxy without `/model/info` — falls back to `/models` with sensible defaults

Screenshots / recordings
N/A — no UI changes
Checklist