Conversation
- Add Ollama detection utility that queries localhost:11434/api/tags
- Auto-register Ollama provider when running locally with detected models
- Add --local flag to /models command to show only local models
- Enable tool calling for Ollama models via custom loader
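The detection utility above can be sketched roughly as follows, assuming Ollama's documented `/api/tags` response shape (`{ models: [{ name: string, ... }] }`); the function names here are illustrative, not opencode's actual API:

```typescript
interface OllamaTagsResponse {
  models: { name: string }[];
}

// Pull model names out of a parsed /api/tags payload.
function parseOllamaModels(payload: OllamaTagsResponse): string[] {
  return payload.models.map((m) => m.name);
}

// Probe the local daemon; resolve to [] when Ollama is not running,
// so auto-registration is simply skipped.
async function detectOllamaModels(
  baseUrl = "http://localhost:11434",
): Promise<string[]> {
  try {
    const res = await fetch(`${baseUrl}/api/tags`);
    if (!res.ok) return [];
    return parseOllamaModels((await res.json()) as OllamaTagsResponse);
  } catch {
    return []; // daemon unreachable -> treat as no local models
  }
}
```

Swallowing the connection error is deliberate: an unreachable daemon just means no local provider gets registered.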
- Auto-detect reasoning models (qwen3, phi4, gemma3, llama3, deepseek, qwq, gpt-oss)
- Add config support to force reasoning ON/OFF via capabilities.reasoning
- Enable interleaved with reasoning_content field for reasoning models
- Increase token limits (context: 200k, output: 32k) for reasoning models
- Add default reasoningEffort: medium for reasoning models
- Add think parameter support for Ollama API (true/false, or low/medium/high for GPT-OSS)
- Use configModel.reasoning instead of configModel.capabilities.reasoning
- Use configModel.interleaved instead of configModel.capabilities.interleaved
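The name-based auto-detection and the reasoning defaults described above could look roughly like this; the family list mirrors the models named in this PR, and the helper names are hypothetical:

```typescript
// Model families treated as reasoning-capable when detected locally.
const REASONING_FAMILIES = [
  "qwen3", "phi4", "gemma3", "llama3", "deepseek", "qwq", "gpt-oss",
];

function isReasoningModel(modelId: string): boolean {
  const id = modelId.toLowerCase();
  return REASONING_FAMILIES.some((family) => id.includes(family));
}

// Overrides applied when a model is detected as reasoning-capable:
// larger limits plus a default reasoning effort.
function reasoningDefaults(modelId: string) {
  if (!isReasoningModel(modelId)) return undefined;
  return { context: 200_000, output: 32_000, reasoningEffort: "medium" };
}
```

A config flag like capabilities.reasoning would simply override the substring check before the defaults are applied.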
Allow variants (low/medium/high) to be generated for Ollama models that were previously blocked by the deepseek/minimax/glm/mistral/kimi check in ProviderTransform.variants()
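What unblocking that check enables can be sketched as follows; the `<model>-<effort>` naming is purely illustrative, not the actual output of ProviderTransform.variants():

```typescript
// Effort levels that each reasoning-capable Ollama model is expanded into
// once the provider is no longer excluded by the allowlist check.
const EFFORTS = ["low", "medium", "high"] as const;

function ollamaVariants(modelId: string): string[] {
  return EFFORTS.map((effort) => `${modelId}-${effort}`);
}
```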
Reasoning detection was only running for auto-detected Ollama models and was skipped when Ollama was configured in opencode.json. Reasoning detection now applies to all Ollama models regardless of how they're loaded.
The createOpenAICompatible provider may not have a chat method in newer AI SDK versions. Added a fallback to languageModel, as the other providers already do.
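The fallback amounts to feature-detecting which factory method the provider object exposes; this is a sketch under that assumption, with an illustrative helper name:

```typescript
// Minimal shape of a provider that may expose either factory method,
// depending on AI SDK version.
type AnyProvider = {
  chat?: (id: string) => unknown;
  languageModel?: (id: string) => unknown;
};

// Prefer chat() when present (older SDKs), otherwise fall back to
// languageModel() (newer SDKs).
function getModel(provider: AnyProvider, id: string): unknown {
  if (typeof provider.chat === "function") return provider.chat(id);
  if (typeof provider.languageModel === "function") {
    return provider.languageModel(id);
  }
  throw new Error("Provider exposes neither chat nor languageModel");
}
```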
Hey! Your PR title doesn't follow our required format. Please update it to start with one of the allowed prefixes. See CONTRIBUTING.md for details.
This PR doesn't fully meet our contributing guidelines and PR template. What needs to be fixed:
Please edit this PR description to address the above within 2 hours, or it will be automatically closed. If you believe this was flagged incorrectly, please let a maintainer know.
Ollama only supports thinking for specific models: DeepSeek R1, DeepSeek v3.1, Qwen 3, and GPT-OSS. Other models like phi4 and gemma3 don't support the think parameter and will error if it's sent.
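Gating the `think` parameter to those families could be sketched as below; the family substrings and helper name are illustrative, not the PR's actual code:

```typescript
// Families the review comment above says support `think`.
const THINK_CAPABLE = ["deepseek-r1", "deepseek-v3.1", "qwen3", "gpt-oss"];

type Effort = "low" | "medium" | "high";

// Returns the value to send as `think`, or undefined to omit the field
// entirely so unsupported models (e.g. phi4, gemma3) don't error.
function thinkParam(modelId: string, effort: Effort): boolean | Effort | undefined {
  const id = modelId.toLowerCase();
  if (!THINK_CAPABLE.some((family) => id.includes(family))) return undefined;
  // GPT-OSS takes an effort level; the other families take a boolean.
  return id.includes("gpt-oss") ? effort : true;
}
```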
The following comment was made by an LLM; it may be inaccurate. Potential duplicate/related PRs found:
I recommend reviewing PRs #11951 and #10758 first, as they appear most directly related to local Ollama model detection and implementation.
This pull request has been automatically closed because it was not updated to meet our contributing guidelines within the 2-hour window. Feel free to open a new pull request that follows our guidelines.
Issue for this PR
Closes #
Type of change
What does this PR do?
Please provide a description of the issue, the changes you made to fix it, and why they work. It is expected that you understand why your changes work and if you do not understand why at least say as much so a maintainer knows how much to value the PR.
Implemented reasoning support for local Ollama models, and added detection of locally installed Ollama models rather than only cloud-hosted ones.
How did you verify your code works?
I'm using the change myself and am currently testing it.
Screenshots / recordings
If this is a UI change, please include a screenshot or recording.
Checklist
If you do not follow this template your PR will be automatically rejected.