feat(opencode): add local server provider with auto model discovery #17688
hmblair wants to merge 2 commits into anomalyco:dev from
Conversation
Add a custom loader for a "local" provider that auto-discovers models from any OpenAI-compatible local server (llama.cpp, ollama, vLLM, LM Studio, etc.) by querying the standard /v1/models endpoint at startup. Users configure only a baseURL and optional apiKey — no manual model listing required. Discovered models are merged with any manually configured models without overwriting them. Closes anomalyco#6231
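The discovery-and-merge behavior described above can be sketched as follows. This is a minimal illustration, not the actual opencode implementation; the names discoverModels, mergeModels, and ModelInfo are hypothetical, and the /models response shape assumes the standard OpenAI list format with a data array of { id } objects.

```typescript
// Hypothetical sketch of local-provider discovery. Assumes Node 18+ global fetch.
type ModelInfo = { name: string }

// Query the OpenAI-compatible /models endpoint; on any failure, return an
// empty list so discovery degrades silently.
async function discoverModels(baseURL: string, apiKey?: string): Promise<string[]> {
  const headers: Record<string, string> = apiKey
    ? { Authorization: `Bearer ${apiKey}` }
    : {}
  try {
    const res = await fetch(`${baseURL}/models`, { headers })
    if (!res.ok) return []
    const body = (await res.json()) as { data?: { id: string }[] }
    return (body.data ?? []).map((m) => m.id)
  } catch {
    // Unreachable endpoint: fail silently, discover nothing.
    return []
  }
}

// Merge discovered model IDs into manually configured models without
// overwriting existing entries, as the PR description states.
function mergeModels(
  configured: Record<string, ModelInfo>,
  discovered: string[],
): Record<string, ModelInfo> {
  const merged = { ...configured }
  for (const id of discovered) {
    if (!(id in merged)) merged[id] = { name: id }
  }
  return merged
}
```

The key design point is that manual configuration always wins: a discovered ID that collides with a manually configured model is ignored rather than replacing it.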
Hey! Your PR title doesn't follow our required format. Please update it to start with one of the approved prefixes. See CONTRIBUTING.md for details.
The following comment was made by an LLM; it may be inaccurate.
Potential duplicate: PR #17670 appears to address the exact same feature as this PR (#17688): dynamic model discovery for local providers, including LM Studio and llama.cpp, which are the core use cases here.
You should investigate PR #17670 to determine whether it already covers this functionality or whether there is overlap in approach.
Thanks for updating your PR! It now meets our contributing guidelines. 👍
…local provider tests The CUSTOM_LOADERS loop skipped providers not registered in models.dev, which prevented the local provider from ever being invoked. Create a stub Info as a fallback so custom loaders can bootstrap themselves. Adds three tests for the local provider: successful auto-discovery, unreachable endpoint, and missing baseURL.
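The fix described in this commit can be sketched as a small fallback. The Info shape and the resolveInfo helper below are simplified, hypothetical stand-ins for opencode's models.dev types; the point is only the shape of the fix: instead of skipping providers absent from the database, supply a stub so their custom loader still runs.

```typescript
// Hypothetical sketch of the CUSTOM_LOADERS fix. Info is a simplified
// stand-in for opencode's models.dev provider record.
interface Info {
  id: string
  models: Record<string, { name: string }>
}

// Previously, a provider with no models.dev entry was silently skipped and
// its custom loader never ran. The fix: fall back to a stub Info so the
// loader can bootstrap itself and fill in models at runtime.
function resolveInfo(id: string, database: Record<string, Info>): Info {
  return database[id] ?? { id, models: {} }
}
```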
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:
Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window. Feel free to open a new pull request that follows our guidelines.
Closes #6231
Type of change
New feature
What it does
Adds a built-in local provider that automatically discovers models from any OpenAI-compatible /models endpoint. Configure a baseURL, and the provider fetches available models at startup. No manual model listing is required.
Also fixes the CUSTOM_LOADERS loop to allow custom loaders that don't have a models.dev entry (previously they were silently skipped).
Why this approach
PR #17670 solves the same problem with 700+ lines across 3 files and a new dynamicModelList config flag. This PR does it in ~50 lines with no new config surface. The local provider is a first-class built-in: it hits /models, registers whatever it finds, and returns autoload: true. If the endpoint is unreachable or returns nothing useful, it silently returns autoload: false. Nothing new to learn, nothing to break.
How to use it
{
  "provider": {
    "local": {
      "options": {
        "baseURL": "http://localhost:11434/v1"
      }
    }
  }
}

With an API key (e.g. LM Studio):

{
  "provider": {
    "local": {
      "options": {
        "baseURL": "http://localhost:1234/v1",
        "apiKey": "lm-studio"
      }
    }
  }
}

Tests
Three tests covering successful discovery, unreachable endpoint, and missing baseURL.
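The three cases can be sketched against a stubbed fetch as below. This is illustrative only; loadLocal and its injected fetchFn parameter are hypothetical names, not the real opencode test suite, and the expected behavior (autoload: false on failure) follows the PR description above.

```typescript
// Hypothetical sketch of the three test scenarios: successful discovery,
// unreachable endpoint, and missing baseURL. fetchFn is injected so the
// failure modes can be simulated without a real server.
type LoadResult = { autoload: boolean; models: string[] }

async function loadLocal(
  baseURL: string | undefined,
  fetchFn: (url: string) => Promise<{ ok: boolean; json(): Promise<any> }>,
): Promise<LoadResult> {
  // Missing baseURL: nothing to query, no autoload.
  if (!baseURL) return { autoload: false, models: [] }
  try {
    const res = await fetchFn(`${baseURL}/models`)
    if (!res.ok) return { autoload: false, models: [] }
    const body = await res.json()
    const models = (body.data ?? []).map((m: { id: string }) => m.id)
    return { autoload: models.length > 0, models }
  } catch {
    // Unreachable endpoint: fail silently.
    return { autoload: false, models: [] }
  }
}
```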