Mirror: Dynamic OpenAI compatible model fetching on front page (#5562) #12
jeremylongshore wants to merge 5 commits into main
Conversation
Add support for dynamically fetching and displaying models from OpenAI-compatible providers.

- Add `openAiModels` field to `ExtensionState` and context (see the sketch below)
- Implement debounced model fetching request in `ExtensionStateContext`
- Update `useProviderModels` hook to prioritize fetched models for OpenAI providers
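The first item is the core state change. A minimal sketch of the added field, with simplified type names; field names mirror the PR's diffs, but the interface shapes here are assumptions (the real `ExtensionState` and provider settings types carry many more fields):

```ts
// Simplified sketch: only the parts relevant to this PR are shown.
interface ApiConfiguration {
	apiProvider?: string
	openAiBaseUrl?: string
	openAiApiKey?: string
	openAiHeaders?: Record<string, string>
}

interface ExtensionState {
	apiConfiguration?: ApiConfiguration
	openAiModels?: string[] // model IDs returned by an OpenAI-compatible /v1/models endpoint
}
```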
📝 Walkthrough

This pull request implements dynamic OpenAI model fetching for the extension. It extends the extension state type with an `openAiModels` field.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    participant Extension as Extension Host
    participant Context as ExtensionStateContext
    participant Hook as useProviderModels
    participant Backend as API/Backend

    Extension->>Context: Initialize with config (apiProvider, baseUrl)
    Context->>Context: Hydrate state
    activate Context
    Context->>Context: Debounced effect triggered<br/>(apiProvider is "openai"?)
    alt When OpenAI provider detected
        Context->>Backend: requestOpenAiModels(baseUrl, apiKey, headers)
        Backend-->>Context: Returns model list (openAiModels)
        Context->>Context: Update state with openAiModels
        Context-->>Hook: Broadcast updated ExtensionState
    else When non-OpenAI provider
        Context->>Context: Skip fetch
    end
    deactivate Context
    Hook->>Hook: Read openAiModels from context
    Hook->>Hook: getModelsByProvider(provider, routerModels, ..., openAiModels)
    alt When openAiModels provided
        Hook->>Hook: Map models to sane defaults
        Hook->>Hook: Set first model as defaultModel
    else When openAiModels empty/missing
        Hook->>Hook: Return empty models map
    end
    Hook-->>Extension: Return mapped models
```
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 2 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (2 passed)
Summary of Changes

Hello @jeremylongshore, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request significantly enhances the application's flexibility by introducing dynamic model fetching for OpenAI-compatible providers. Instead of relying on static model lists, the system now retrieves available models directly from the configured API endpoint. This ensures that the user interface always presents the most current model options, improving adaptability to new models and configuration changes without requiring manual updates.

Highlights
Changelog
Activity
Code Review
This pull request introduces dynamic model fetching for OpenAI-compatible providers, which is a great feature. The implementation is solid, but I've identified a couple of areas for improvement. My suggestions focus on making the logic more robust by clearing stale model data when the configuration changes and improving the clarity of the code that handles the fetched models. These changes will enhance the user experience and maintainability.
```ts
if (openAiModels) {
	return {
		models: Object.fromEntries(openAiModels.map((model) => [model, openAiModelInfoSaneDefaults])),
		defaultModel: openAiModels[0] || "",
	}
}
```
The `if (openAiModels)` check is truthy for an empty array. This makes the logic inside the `if` block redundant for that case, as it produces the same result as the return statement on line 193. It's better to explicitly check if the array has elements, for clarity and to avoid the `|| ""` fallback.
```ts
if (openAiModels && openAiModels.length > 0) {
	return {
		models: Object.fromEntries(openAiModels.map((model) => [model, openAiModelInfoSaneDefaults])),
		defaultModel: openAiModels[0],
	}
}
```
```ts
useDebounce(
	() => {
		if (!didHydrateState) {
			return
		}

		const { apiProvider, openAiBaseUrl, openAiApiKey, openAiHeaders } = state.apiConfiguration || {}

		if (apiProvider === "openai" || apiProvider === "openai-responses") {
			if (openAiBaseUrl) {
				vscode.postMessage({
					type: "requestOpenAiModels",
					values: {
						baseUrl: openAiBaseUrl,
						apiKey: openAiApiKey,
						openAiHeaders,
					},
				})
			}
		}
	},
	500,
	[
		didHydrateState,
		state.apiConfiguration?.apiProvider,
		state.apiConfiguration?.openAiBaseUrl,
		state.apiConfiguration?.openAiApiKey,
		state.apiConfiguration?.openAiHeaders,
	],
)
```
The current implementation of the `useDebounce` hook for fetching OpenAI models doesn't clear the `openAiModels` state when the user switches to a different provider or removes the `openAiBaseUrl`. This can lead to stale data being displayed in the UI, which can be confusing. To improve the user experience, the model list should be cleared in these scenarios.
```ts
useDebounce(
	() => {
		if (!didHydrateState) {
			return
		}
		const { apiProvider, openAiBaseUrl, openAiApiKey, openAiHeaders } = state.apiConfiguration || {}
		if (apiProvider === "openai" || apiProvider === "openai-responses") {
			if (openAiBaseUrl) {
				vscode.postMessage({
					type: "requestOpenAiModels",
					values: {
						baseUrl: openAiBaseUrl,
						apiKey: openAiApiKey,
						openAiHeaders,
					},
				})
			} else if (state.openAiModels?.length) {
				// Clear models if base URL is removed for the relevant provider
				setState((prevState) => ({ ...prevState, openAiModels: [] }))
			}
		} else if (state.openAiModels?.length) {
			// Clear models if provider is not OpenAI-compatible
			setState((prevState) => ({ ...prevState, openAiModels: [] }))
		}
	},
	500,
	[
		didHydrateState,
		state.apiConfiguration?.apiProvider,
		state.apiConfiguration?.openAiBaseUrl,
		state.apiConfiguration?.openAiApiKey,
		state.apiConfiguration?.openAiHeaders,
	],
)
```
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In
`@webview-ui/src/components/kilocode/hooks/__tests__/dynamic-openai-models.spec.ts`:
- Around line 45-57: getModelsByProvider is missing handling for the
"openai-responses" provider so dynamically fetched models passed via
openAiModels get discarded; add a case "openai-responses" in getModelsByProvider
that mirrors the existing "openai" branch (using the openAiModels argument to
populate result.models and result.defaultModel with the same defaults like
contextWindow and supportsImages) or alternatively remove "openai-responses"
from the fetch trigger in ExtensionStateContext so both locations stay
consistent; update getModelsByProvider (and ensure any tests referencing
getModelsByProvider still pass) to reference the openAiModels parameter when
apiProvider === "openai-responses".
🧹 Nitpick comments (2)
webview-ui/src/context/ExtensionStateContext.tsx (1)
564-594: Consider clearing `openAiModels` when switching away from the OpenAI provider.

When the user switches from an OpenAI provider to another provider, the previously fetched `openAiModels` remain in state. While this is harmless for the current implementation (since `getModelsByProvider` only uses `openAiModels` for the `"openai"` case), it could lead to stale data if the user switches back before a new fetch completes.

♻️ Optional: Clear models when provider changes away from OpenAI

```diff
 useDebounce(
 	() => {
 		if (!didHydrateState) {
 			return
 		}
 		const { apiProvider, openAiBaseUrl, openAiApiKey, openAiHeaders } = state.apiConfiguration || {}
 		if (apiProvider === "openai" || apiProvider === "openai-responses") {
 			if (openAiBaseUrl) {
 				vscode.postMessage({
 					type: "requestOpenAiModels",
 					values: {
 						baseUrl: openAiBaseUrl,
 						apiKey: openAiApiKey,
 						openAiHeaders,
 					},
 				})
 			}
+		} else {
+			setState((prev) => ({ ...prev, openAiModels: [] }))
 		}
 	},
 	500,
```

webview-ui/src/components/kilocode/hooks/useProviderModels.ts (1)
186-197: Truthy check on `openAiModels` treats empty arrays as "provided".

`if (openAiModels)` is truthy for `[]`, so an empty array returns `{ models: {}, defaultModel: "" }` — which happens to match the fallback. This works but is semantically ambiguous: an empty fetch result and "not yet fetched" produce the same output. If you ever need to distinguish these states (e.g., to show a loading indicator), consider using `openAiModels?.length` or `undefined` vs `[]`.

For now this is fine since the tests confirm the expected behavior.
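A tiny illustration of the distinction the reviewer is pointing at; the helper below is hypothetical and not part of the PR:

```ts
// undefined -> models not fetched yet (could drive a loading indicator)
// []        -> fetch completed, but the provider returned no models
// non-empty -> models ready to display
function describeOpenAiModelState(openAiModels?: string[]): "loading" | "empty" | "ready" {
	if (openAiModels === undefined) return "loading"
	return openAiModels.length === 0 ? "empty" : "ready"
}
```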
| it("returns dynamically fetched models when openAiModels is provided", () => { | ||
| const result = getModelsByProvider({ | ||
| ...baseArgs, | ||
| provider: "openai", | ||
| openAiModels: ["gpt-4o", "gpt-4o-mini", "o1-preview"], | ||
| }) | ||
|
|
||
| expect(Object.keys(result.models)).toEqual(["gpt-4o", "gpt-4o-mini", "o1-preview"]) | ||
| expect(result.defaultModel).toBe("gpt-4o") | ||
| // Each model should have sane defaults (128K context, supports images) | ||
| expect(result.models["gpt-4o"].contextWindow).toBe(128_000) | ||
| expect(result.models["gpt-4o"].supportsImages).toBe(true) | ||
| }) |
🧩 Analysis chain
🏁 Script executed:

```shell
# Check if openai-responses is handled in getModelsByProvider
rg -n 'openai-responses' webview-ui/src/components/kilocode/hooks/useProviderModels.ts
```

Repository: jeremylongshore/kilocode

Length of output: 50

🏁 Script executed:

```shell
# Also search for openai-responses more broadly to understand its usage
rg -n 'openai-responses' webview-ui/src/components/kilocode/hooks/ -A 2 -B 2
```

Repository: jeremylongshore/kilocode

Length of output: 678

🏁 Script executed:

```shell
# Check the switch statement in getModelsByProvider to see all cases
fd 'useProviderModels.ts' webview-ui/src/components/kilocode/hooks/ --exec cat -n {} \;
```

Repository: jeremylongshore/kilocode

Length of output: 11631

🏁 Script executed:

```shell
# Check the ExtensionStateContext to see where model fetching is triggered
rg -n 'ExtensionStateContext' webview-ui/src/components/kilocode/ -l
```

Repository: jeremylongshore/kilocode

Length of output: 1031

🏁 Script executed:

```shell
# Search for ExtensionStateContext to find where model fetching is triggered
rg -n 'openai-responses' webview-ui/src/context/ -A 3 -B 3
```

Repository: jeremylongshore/kilocode

Length of output: 699

🏁 Script executed:

```shell
# Also check the ExtensionStateContext file itself for model fetching logic
fd 'ExtensionStateContext' webview-ui/src/ --exec cat -n {} \; | head -600
```

Repository: jeremylongshore/kilocode

Length of output: 29014

🏁 Script executed:

```shell
# Search more broadly for where openAiModels is fetched or populated
rg -n 'openAiModels' webview-ui/src/context/ -A 2 -B 2
```

Repository: jeremylongshore/kilocode

Length of output: 1261

🏁 Script executed:

```shell
# Check if openai-responses is used as a provider anywhere in the codebase
rg -n '"openai-responses"' webview-ui/src --type ts --type tsx -B 2 -A 2
```

Repository: jeremylongshore/kilocode

Length of output: 95

🏁 Script executed:

```shell
# Also check if there's any mention of openai-responses in provider type definitions
rg -n 'openai-responses' --type ts --type tsx -l
```

Repository: jeremylongshore/kilocode

Length of output: 95

🏁 Script executed:

```shell
# Check if openai-responses is used as a provider anywhere in the codebase
rg -n '"openai-responses"' webview-ui/src -B 2 -A 2
```

Repository: jeremylongshore/kilocode

Length of output: 3569

🏁 Script executed:

```shell
# Check if openai-responses is defined in type definitions
fd 'types' webview-ui/src -type d | head -5
```

Repository: jeremylongshore/kilocode

Length of output: 238

🏁 Script executed:

```shell
# Search for ProviderName type definition to see all valid providers
rg -n 'type ProviderName' webview-ui/src -A 30
```

Repository: jeremylongshore/kilocode

Length of output: 14398

🏁 Script executed:

```shell
# Check the complete case for openai-responses in useSelectedModel.ts
sed -n '349,365p' webview-ui/src/components/ui/hooks/useSelectedModel.ts
```

Repository: jeremylongshore/kilocode

Length of output: 817

🏁 Script executed:

```shell
# Check if openAiModels state is used anywhere in useSelectedModel or elsewhere
rg -n 'openAiModels' webview-ui/src/components -B 2 -A 2
```

Repository: jeremylongshore/kilocode

Length of output: 6925
Add missing case for openai-responses in getModelsByProvider.
The ExtensionStateContext (line 573) triggers model fetching for both "openai" and "openai-responses" providers, but getModelsByProvider only has a case "openai" branch (lines 186-197). When apiProvider is "openai-responses", the function falls through to the default case and returns empty models and defaultModel, silently discarding the dynamically fetched models. Either add a matching case for "openai-responses" that handles the openAiModels parameter, or remove "openai-responses" from the fetch trigger in ExtensionStateContext.
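For illustration, one way the suggested fix could look, assuming `getModelsByProvider` can route both provider names through a shared helper; the defaults object is redefined here only to keep the sketch self-contained (in the codebase it is imported):

```ts
// Assumed stand-in for the imported openAiModelInfoSaneDefaults (128K context, image
// support, zero pricing, per this review); the real object has more fields.
const openAiModelInfoSaneDefaults = { contextWindow: 128_000, supportsImages: true, inputPrice: 0, outputPrice: 0 }

// Hypothetical shared branch: both "openai" and "openai-responses" map the dynamically
// fetched IDs onto sane defaults, so neither silently falls through to the empty result.
function modelsFromOpenAiList(openAiModels?: string[]) {
	if (openAiModels && openAiModels.length > 0) {
		return {
			models: Object.fromEntries(openAiModels.map((model) => [model, openAiModelInfoSaneDefaults])),
			defaultModel: openAiModels[0],
		}
	}
	return { models: {}, defaultModel: "" }
}

// Inside getModelsByProvider, the two providers could then share the branch:
//   case "openai":
//   case "openai-responses":
//       return modelsFromOpenAiList(openAiModels)
```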
4231140 to fa13626 (Compare)
Content pushed directly to main — merge conflict resolved.
Review Summary
Checklist
Findings
| Test | Command | Result | Details |
|---|---|---|---|
| TypeScript | `pnpm check-types` | PASS | 22/22 packages |
| Lint | `pnpm lint` | PASS | 18/18 packages |
| Unit Tests | `pnpm test --continue` | PASS | 7,803 passed, 85 skipped (pre-existing failures: core-schemas, agent-runtime) |
Behavioral (new targeted tests)
| Test Case | Expected | Result |
|---|---|---|
| `openAiModels: ["gpt-4o", "gpt-4o-mini", "o1-preview"]` | Models returned with sane defaults (128K context, supports images) | PASS |
| No `openAiModels` provided | Empty models, empty default | PASS |
| `openAiModels: []` (empty array) | Empty models, empty default | PASS |
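For reference, the empty-array case from the table would look roughly like the sketch below; it follows the style of the spec quoted earlier, but `baseArgs` and the surrounding imports are assumptions rather than copies from the actual file:

```ts
it("returns empty models when openAiModels is an empty array", () => {
	const result = getModelsByProvider({
		...baseArgs,
		provider: "openai",
		openAiModels: [],
	})

	expect(result.models).toEqual({})
	expect(result.defaultModel).toBe("")
})
```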
Test file: dynamic-openai-models.spec.ts
Tested on fork branch `review/PR-5562`
Code Analysis
Architecture
This PR connects the front page model selector to an existing backend API (requestOpenAiModels) that was already used by the settings page. Clean layering:
- `ExtensionStateContext.tsx` — debounced message sender (triggers on config change)
- `webviewMessageHandler.ts` (pre-existing) — fetches from the `/v1/models` endpoint (a rough sketch of such a request follows this list)
- `useProviderModels.ts` — consumes `openAiModels` from state, converts to `ModelRecord`
- `vscode-extension-host.ts` — adds `openAiModels?: string[]` to the `ExtensionState` type
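The handler itself is not shown on this page. Purely as an illustration, a request against an OpenAI-compatible `/v1/models` endpoint typically looks like the sketch below (assuming `baseUrl` already ends in `/v1`, as with the OpenAI API); the PR reuses the existing handler rather than adding a fetcher like this:

```ts
// Illustrative only; error handling and response parsing are simplified.
async function listOpenAiCompatibleModels(
	baseUrl: string,
	apiKey?: string,
	extraHeaders: Record<string, string> = {},
): Promise<string[]> {
	const response = await fetch(`${baseUrl.replace(/\/+$/, "")}/models`, {
		headers: {
			...(apiKey ? { Authorization: `Bearer ${apiKey}` } : {}),
			...extraHeaders,
		},
	})
	if (!response.ok) {
		throw new Error(`Model listing failed with HTTP ${response.status}`)
	}
	const body = (await response.json()) as { data?: Array<{ id: string }> }
	return (body.data ?? []).map((model) => model.id)
}
```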
Key Design Decisions
- `useDebounce` (500ms): Prevents API spam during rapid config changes. Appropriate for text input fields.
- `openAiModelInfoSaneDefaults`: Uses 128K context, supports images, $0 pricing. Reasonable for unknown models.
- State via message channel: Models are fetched on demand and stored in React state, not persisted to disk. Re-fetches on startup via the `didHydrateState` trigger. (Message shape sketched below.)
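The message shape is implied by the `postMessage` call in the diffs above; here is a sketch with invented interface names (they do not exist in the codebase):

```ts
// Hypothetical names; the actual webview message union lives in the extension's shared types.
// Field names mirror the postMessage call shown in the review diff.
interface RequestOpenAiModelsMessage {
	type: "requestOpenAiModels"
	values: {
		baseUrl: string
		apiKey?: string
		openAiHeaders?: Record<string, string>
	}
}

// The extension host replies by updating webview state, which re-renders the model dropdown,
// e.g. state.openAiModels = ["gpt-4o", "gpt-4o-mini"].
```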
CI Status
| Check | Result |
|---|---|
| Upstream CI | PASS (11/11 checks) |
| Merge status | CONFLICTING (needs rebase) |
| Fork CI | PR #12 |
| Local verification | PASS (all tests) |
Verdict
APPROVE — Well-scoped feature that connects the front page to an existing backend API for dynamic model fetching. The openai-responses mismatch is non-blocking and the merge conflict needs resolution by the author. Code follows established patterns and passes all tests.
Review Journal: kilocode Kilo-Org#5562
Summary

Adds dynamic model fetching for the OpenAI provider on the front page. When users configure an OpenAI-compatible base URL, the extension now automatically queries the provider's `/v1/models` endpoint.

What Changed

The PR connects the front page model selector to an existing backend API that was already used by the settings page. The architecture is clean — four files touched, each with a clear role:
The backend handler (`webviewMessageHandler.ts`) is pre-existing and fetches from the `/v1/models` endpoint.

Verification

Regression (did we break anything?)
Behavioral (does the feature actually work?)

We wrote 3 targeted tests to verify the `openAiModels` handling in `getModelsByProvider`.
Test file: `dynamic-openai-models.spec.ts`
Tested on fork branch `review/PR-5562`

Lessons Learned
Review #19 of 75 | Multi-AI analysis | Methodology
Mirror of Kilo-Org#5562
This PR mirrors the upstream change for multi-AI review analysis.
Changes
Adds dynamic model fetching for OpenAI-compatible providers. When base URL and API key are configured, the extension fetches available models from the provider's `/v1/models` endpoint and displays them in the model dropdown.

Bot Review Tracker
Links
Summary by CodeRabbit
New Features
Tests
Chores