fix: provide fallback context window values for Ollama and LM Studio models #7676
Conversation
…models

- Add fallback `ModelInfo` when `routerModels.ollama` or `lmStudioModels` return undefined
- Fixes context window display showing "used/1" instead of actual max tokens
- Ensures proper context window management for Ollama and LM Studio providers

Fixes #7674
Reviewing my own code is like debugging in a mirror - everything looks backward but the bugs are still mine.
```typescript
contextWindow: 8192,
supportsImages: false,
supportsPromptCache: true,
}
```
The fallback logic here is duplicated with the LM Studio case below. Could we extract this into a shared helper function to reduce duplication? Something like:
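(A minimal sketch of such a helper; the name `getFallbackModelInfo` and the `ModelInfo` import are assumptions, with field values taken from this diff.)

```typescript
import type { ModelInfo } from "@roo-code/types" // assumed import path

// Hypothetical shared helper: returns the hardcoded fallback ModelInfo
// only when a model id is actually selected, so both the "ollama" and
// "lmstudio" cases can reuse it instead of duplicating the literal.
const getFallbackModelInfo = (id: string): ModelInfo | undefined =>
	id
		? {
				maxTokens: 8192,
				contextWindow: 8192,
				supportsImages: false,
				supportsPromptCache: true,
			}
		: undefined

// Usage in either case:
// const info = (routerModels.ollama && routerModels.ollama[id]) || getFallbackModelInfo(id)
```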
```typescript
info ||
(id
	? {
		maxTokens: 8192,
```
Is 8192 the right default for all Ollama models? Some models support much larger context windows. Could we consider making this configurable or perhaps use a more generous default like 32768?
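(For illustration, a hedged sketch of pulling the default into a named constant; the constant name is hypothetical.)

```typescript
// Hypothetical named constant so the default context window lives in one
// place and could later be raised (e.g. to 32768) or wired to a setting.
const DEFAULT_LOCAL_MODEL_CONTEXT_WINDOW = 8192
```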
```typescript
}
case "lmstudio": {
	const id = apiConfiguration.lmStudioModelId ?? ""
	const info = lmStudioModels && lmStudioModels[apiConfiguration.lmStudioModelId!]
```
The non-null assertion here could be avoided with better type checking. Consider:
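(A sketch of one way to do that, reusing the already-defaulted `id`; this assumes `lmStudioModels` is keyed by model id.)

```typescript
// Sketch: index with the already-defaulted `id` instead of asserting
// non-null on apiConfiguration.lmStudioModelId.
const id = apiConfiguration.lmStudioModelId ?? ""
const info = lmStudioModels ? lmStudioModels[id] : undefined
```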
| case "ollama": { | ||
| const id = apiConfiguration.ollamaModelId ?? "" | ||
| const info = routerModels.ollama && routerModels.ollama[id] | ||
| // Provide fallback values when info is undefined to fix context window display |
Should we add test coverage for these fallback scenarios? I noticed there are no tests for Ollama or LM Studio providers in the test file. This would help ensure the fallback behavior works correctly and prevent regressions.
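(A sketch of what such coverage could look like, exercising the hypothetical `getFallbackModelInfo` helper from the earlier comment; Vitest is assumed as the test framework.)

```typescript
import { describe, it, expect } from "vitest"

describe("local provider fallback ModelInfo", () => {
	it("supplies fallback info when a model id is set but not listed", () => {
		const info = getFallbackModelInfo("llama3:8b")
		expect(info?.contextWindow).toBe(8192)
		expect(info?.maxTokens).toBe(8192)
	})

	it("returns undefined when no model id is selected", () => {
		expect(getFallbackModelInfo("")).toBeUndefined()
	})
})
```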
Closing in favor of #7679, which provides a more comprehensive solution by fixing the root cause rather than using hardcoded fallback values.
This PR attempts to address Issue #7674. Feedback and guidance are welcome.
Problem
The task header was displaying an incorrect max context window for Ollama (showing "used/1" instead of the actual max tokens). The issue was that `info` was always undefined at line 257 in `useSelectedModel.ts` when `routerModels.ollama` was an empty object or the specific model wasn't found.

Solution
Changes

- Modified `webview-ui/src/components/ui/hooks/useSelectedModel.ts` to provide fallback values when model info is undefined

Testing
Fixes #7674
Important
Fixes incorrect context window display for Ollama and LM Studio models by providing fallback values in `useSelectedModel.ts`.

- Updates `useSelectedModel.ts` by providing fallback values when model info is undefined.