
Conversation

@roomote roomote bot commented Sep 4, 2025

This PR attempts to address Issue #7674. Feedback and guidance are welcome.

Problem

The task header displayed an incorrect max context window for Ollama, showing "used/1" instead of the actual max tokens. The root cause is that info was always undefined at line 257 in useSelectedModel.ts when routerModels.ollama was an empty object or the specific model wasn't found.

Solution

  • Add a fallback ModelInfo when the routerModels.ollama or lmStudioModels lookup returns undefined (see the sketch after this list)
  • Provide reasonable default values (8192 for both the context window and max tokens)
  • Apply the same fix to both the Ollama and LM Studio providers for consistency
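
In sketch form, the Ollama case ends up looking roughly like this (pieced together from the hunks quoted in the review comments below; the resolvedInfo name and the surrounding control flow are assumptions):

    const id = apiConfiguration.ollamaModelId ?? ""
    const info = routerModels.ollama && routerModels.ollama[id]
    // Provide fallback values when info is undefined to fix context window display
    const resolvedInfo =
        info ||
        (id
            ? {
                  maxTokens: 8192,
                  contextWindow: 8192,
                  supportsImages: false,
                  supportsPromptCache: true, // true for Ollama; the LM Studio case uses false
              }
            : undefined)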

Changes

  • Modified webview-ui/src/components/ui/hooks/useSelectedModel.ts to provide fallback values when model info is undefined
  • Set appropriate support flags (Ollama supports prompt cache, LM Studio doesn't)

Testing

  • All existing tests pass
  • Linting passes
  • Type checking passes

Fixes #7674


Important

Fixes incorrect context window display for Ollama and LM Studio models by providing fallback values in useSelectedModel.ts.

  • Behavior:
    • Provides fallback values in useSelectedModel.ts when model info is undefined for Ollama and LM Studio models.
    • Sets default values of 8192 for both the context window and max tokens.
    • Sets supportsPromptCache to true for Ollama and false for LM Studio.
  • Testing:
    • All existing tests pass.
    • Linting and type checking pass.

This description was created by Ellipsis for 5b55f6e.

…models

- Add fallback ModelInfo when routerModels.ollama or lmStudioModels return undefined
- Fixes context window display showing "used/1" instead of actual max tokens
- Ensures proper context window management for Ollama and LM Studio providers

Fixes #7674
@roomote roomote bot requested review from cte, jr and mrubens as code owners September 4, 2025 17:45
@dosubot dosubot bot added size:S This PR changes 10-29 lines, ignoring generated files. bug Something isn't working labels Sep 4, 2025
@hannesrudolph hannesrudolph added the Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. label Sep 4, 2025
@roomote roomote bot left a comment
Reviewing my own code is like debugging in a mirror - everything looks backward but the bugs are still mine.

        contextWindow: 8192,
        supportsImages: false,
        supportsPromptCache: true,
    }
The fallback logic here is duplicated with the LM Studio case below. Could we extract this into a shared helper function to reduce duplication? Something like the sketch below, where createFallbackModelInfo is an illustrative name and ModelInfo is assumed to include these fields:
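
    const createFallbackModelInfo = (supportsPromptCache: boolean): ModelInfo => ({
        maxTokens: 8192,
        contextWindow: 8192,
        supportsImages: false,
        supportsPromptCache,
    })

    // Both provider cases could then share it:
    // const info = (routerModels.ollama && routerModels.ollama[id]) || createFallbackModelInfo(true)
    // const info = (lmStudioModels && lmStudioModels[id]) || createFallbackModelInfo(false)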

    info ||
    (id
        ? {
              maxTokens: 8192,
Is 8192 the right default for all Ollama models? Some models support much larger context windows. Could we consider making this configurable or perhaps use a more generous default like 32768?
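
For instance, hoisting the default into a named constant would make it easier to raise later or to wire up to a user setting (a sketch; the constant name is illustrative):

    // Illustrative: one shared default instead of scattered 8192 literals.
    const FALLBACK_CONTEXT_WINDOW = 8192 // could become 32768, or a user-configurable value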

    }
    case "lmstudio": {
        const id = apiConfiguration.lmStudioModelId ?? ""
        const info = lmStudioModels && lmStudioModels[apiConfiguration.lmStudioModelId!]
The non-null assertion here could be avoided with better type checking. Consider indexing with the already-defaulted id, as the Ollama case below does:
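
    // Sketch: reuse the null-coalesced id instead of asserting non-null.
    const id = apiConfiguration.lmStudioModelId ?? ""
    const info = lmStudioModels && lmStudioModels[id]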

case "ollama": {
const id = apiConfiguration.ollamaModelId ?? ""
const info = routerModels.ollama && routerModels.ollama[id]
// Provide fallback values when info is undefined to fix context window display
Should we add test coverage for these fallback scenarios? I noticed there are no tests for Ollama or LM Studio providers in the test file. This would help ensure the fallback behavior works correctly and prevent regressions.
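
A sketch of one such test (assumes a Vitest/Jest-style harness; the renderUseSelectedModel setup helper and the hook's exact return shape are assumptions):

    it("falls back to an 8192 context window when the Ollama model is unknown", () => {
        const { info } = renderUseSelectedModel({
            apiConfiguration: { apiProvider: "ollama", ollamaModelId: "missing-model" },
            routerModels: { ollama: {} },
        })
        expect(info.contextWindow).toBe(8192)
        expect(info.maxTokens).toBe(8192)
        expect(info.supportsPromptCache).toBe(true)
    })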

@daniel-lxs daniel-lxs moved this from Triage to PR [Needs Prelim Review] in Roo Code Roadmap Sep 5, 2025
@hannesrudolph hannesrudolph added PR - Needs Preliminary Review and removed Issue/PR - Triage New issue. Needs quick review to confirm validity and assign labels. labels Sep 5, 2025
@daniel-lxs (Member) commented

Closing in favor of #7679 which provides a more comprehensive solution by fixing the root cause rather than using hardcoded fallback values.

@daniel-lxs daniel-lxs closed this Sep 5, 2025
@github-project-automation github-project-automation bot moved this from PR [Needs Prelim Review] to Done in Roo Code Roadmap Sep 5, 2025
@github-project-automation github-project-automation bot moved this from New to Done in Roo Code Roadmap Sep 5, 2025
