Ollama offers the Deepseek-coder-v2 model (one of the cheapest and best-performing models for coding), but the 236B model is too heavy to run locally for most setups. However, the DeepSeek API makes it available, and it performs very well as a code assistant.
It would be great if it could be integrated into the codecompanion plugin.
At the moment, I'm focusing more on open models running locally. But perhaps you could override some of the functions that send requests to the LLM in order to support the DeepSeek API?
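
For reference, a minimal sketch of what such an override could look like, not the plugin's actual extension API. Since the DeepSeek API is OpenAI-compatible, the request below mirrors the OpenAI chat/completions payload; the function name, callback, and environment variable are hypothetical, and the `deepseek-coder` model name is taken from DeepSeek's public API docs at the time.

```lua
-- Hypothetical sketch: send a chat completion request to the DeepSeek API
-- from Neovim, the kind of function one could swap into the plugin's
-- request path. Assumes DEEPSEEK_API_KEY is set in the environment and
-- curl is on PATH.
local function deepseek_complete(prompt, on_done)
  local payload = vim.fn.json_encode({
    model = "deepseek-coder", -- DeepSeek's code model exposed by the API
    messages = { { role = "user", content = prompt } },
  })

  -- Run curl asynchronously so the editor stays responsive.
  vim.fn.jobstart({
    "curl", "-s", "https://api.deepseek.com/chat/completions",
    "-H", "Content-Type: application/json",
    "-H", "Authorization: Bearer " .. (vim.env.DEEPSEEK_API_KEY or ""),
    "-d", payload,
  }, {
    stdout_buffered = true,
    on_stdout = function(_, lines)
      local ok, decoded = pcall(vim.fn.json_decode, table.concat(lines, "\n"))
      if ok and decoded and decoded.choices then
        -- OpenAI-compatible responses put the text in choices[1].message.content
        on_done(decoded.choices[1].message.content)
      end
    end,
  })
end

-- Example usage: print the model's answer to :messages
deepseek_complete("Write a Lua function that reverses a list.", print)
```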