1 change: 1 addition & 0 deletions crates/goose/src/providers/canonical/name_builder.rs
@@ -48,6 +48,7 @@ pub fn map_provider_name(provider: &str) -> &str {
"gcp_vertex_ai" => "google-vertex",
"gemini_oauth" => "google",
"zhipu" => "zhipuai",
"novita" => "novita-ai",
_ => provider,
}
}
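The arm added here is one entry in a simple alias table that maps goose's internal provider id to its catalog id. A minimal Python sketch of the same logic, reproducing only the match arms visible in this hunk (the real `map_provider_name` has more arms above line 48):

```python
# Sketch of the provider-name aliasing in map_provider_name.
# Only the match arms visible in the diff hunk are reproduced here.
CANONICAL_PROVIDER_NAMES = {
    "gcp_vertex_ai": "google-vertex",
    "gemini_oauth": "google",
    "zhipu": "zhipuai",
    "novita": "novita-ai",  # arm added by this PR
}

def map_provider_name(provider: str) -> str:
    # Unknown providers fall through unchanged,
    # like the Rust `_ => provider` arm.
    return CANONICAL_PROVIDER_NAMES.get(provider, provider)

print(map_provider_name("novita"))  # novita-ai
print(map_provider_name("openai"))  # openai
```

The pass-through default means only providers whose goose id differs from their catalog id need an entry, which is why this PR is a one-line change.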
37 changes: 37 additions & 0 deletions crates/goose/src/providers/declarative/novita.json
@@ -0,0 +1,37 @@
{
"name": "novita",
"engine": "openai",
"display_name": "Novita AI",
"description": "90+ open-source models with OpenAI-compatible API and competitive pricing",
"api_key_env": "NOVITA_API_KEY",
"base_url": "https://api.novita.ai/openai/chat/completions",
"catalog_provider_id": "novita-ai",
"models": [
{
"name": "moonshotai/kimi-k2.5",
"context_limit": 262144,
"max_tokens": 262144
},
{
"name": "minimax/minimax-m2.7",
"context_limit": 204800,
"max_tokens": 131072
},
{
"name": "zai-org/glm-5.1",
"context_limit": 204800,
"max_tokens": 131072
},
{
"name": "deepseek/deepseek-v3.2",
"context_limit": 163840,
"max_tokens": 65536
},
{
"name": "google/gemma-4-31b-it",
"context_limit": 262144,
"max_tokens": 131072
}
],
"supports_streaming": true
}
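As a sanity check on the shape of this declarative config, here is a small Python sketch that loads an abbreviated copy of the fragment above and verifies the fields the declarative loader presumably reads; the field semantics are assumptions inferred from the JSON itself, not from the loader's code:

```python
import json

# Abbreviated copy of the novita.json fragment added by this PR.
NOVITA_JSON = """
{
  "name": "novita",
  "engine": "openai",
  "display_name": "Novita AI",
  "api_key_env": "NOVITA_API_KEY",
  "base_url": "https://api.novita.ai/openai/chat/completions",
  "catalog_provider_id": "novita-ai",
  "models": [
    {"name": "moonshotai/kimi-k2.5", "context_limit": 262144, "max_tokens": 262144},
    {"name": "deepseek/deepseek-v3.2", "context_limit": 163840, "max_tokens": 65536}
  ],
  "supports_streaming": true
}
"""

config = json.loads(NOVITA_JSON)

# catalog_provider_id matches the alias added in name_builder.rs.
assert config["catalog_provider_id"] == "novita-ai"
# Every model advertises a max_tokens no larger than its context window.
for model in config["models"]:
    assert model["max_tokens"] <= model["context_limit"]
print("config ok:", [m["name"] for m in config["models"]])
```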
40 changes: 40 additions & 0 deletions documentation/docs/getting-started/providers.md
Expand Up @@ -37,6 +37,7 @@ goose is compatible with a wide range of LLM providers, allowing you to choose a
| [LiteLLM](https://docs.litellm.ai/docs/) | LiteLLM proxy supporting multiple models with automatic prompt caching and unified API access. | `LITELLM_HOST`, `LITELLM_BASE_PATH` (optional), `LITELLM_API_KEY` (optional), `LITELLM_CUSTOM_HEADERS` (optional), `LITELLM_TIMEOUT` (optional) |
| [LM Studio](https://lmstudio.ai/) | Run local models with LM Studio's OpenAI-compatible server. **Because this provider runs locally, you must first [download a model](#local-llms).** | None required. Connects to local server at `localhost:1234` by default. |
| [Mistral AI](https://mistral.ai/) | Provides access to Mistral models including general-purpose models, specialized coding models (Codestral), and multimodal models (Pixtral). | `MISTRAL_API_KEY` |
| [Novita AI](https://novita.ai/) | 90+ open-source models with an OpenAI-compatible API and competitive pricing. Supports Kimi K2.5, DeepSeek, GLM, MiniMax, Qwen, and more. | `NOVITA_API_KEY` |
| [Ollama](https://ollama.com/) | Local model runner supporting Qwen, Llama, DeepSeek, and other open-source models. **Because this provider runs locally, you must first [download and run a model](#local-llms).** | `OLLAMA_HOST` |
| [OpenAI](https://platform.openai.com/api-keys) | Provides gpt-4o, o1, and other advanced language models. Also supports OpenAI-compatible endpoints (e.g., self-hosted LLaMA, vLLM, KServe). **o1-mini and o1-preview are not supported because goose uses tool calling.** | `OPENAI_API_KEY`, `OPENAI_HOST` (optional), `OPENAI_ORGANIZATION` (optional), `OPENAI_PROJECT` (optional), `OPENAI_CUSTOM_HEADERS` (optional) |
| [OpenRouter](https://openrouter.ai/) | API gateway for unified access to various models with features like rate-limiting management. | `OPENROUTER_API_KEY` |
@@ -693,6 +694,45 @@ To set up Groq with goose, follow these steps:
</TabItem>
</Tabs>

### Novita AI
[Novita AI](https://novita.ai/) provides access to 90+ open-source models via an OpenAI-compatible API with competitive pricing. To use Novita AI with goose, you need an API key from [Novita AI](https://novita.ai/settings#key-management).

Novita AI offers many models that support tool calling, including:
- **moonshotai/kimi-k2.5** - Moonshot's latest model with a 262K context window
- **minimax/minimax-m2.7** - MiniMax M2.7 with a 205K context window
- **zai-org/glm-5.1** - Zhipu's GLM-5.1 with a 205K context window
- **deepseek/deepseek-v3.2** - DeepSeek V3.2 with a 164K context window
- **google/gemma-4-31b-it** - Google Gemma 4 31B with a 262K context window

For the complete list of supported Novita AI models, see [novita.json](https://github.com/aaif-goose/goose/blob/main/crates/goose/src/providers/declarative/novita.json).

To set up Novita AI with goose, follow these steps:

<Tabs groupId="interface">
<TabItem value="ui" label="goose Desktop" default>
**To update your LLM provider and API key:**

1. Click the <PanelLeft className="inline" size={16} /> button in the top-left to open the sidebar.
2. Click the `Settings` button on the sidebar.
3. Click the `Models` tab.
4. Click `Configure Providers`.
5. Choose `Novita AI` from the list of providers.
6. Click `Configure`, enter your API key, and click `Submit`.
7. Select the Novita AI model of your choice.

</TabItem>
<TabItem value="cli" label="goose CLI">
1. Run:
```sh
goose configure
```
2. Select `Configure Providers` from the menu.
3. Follow the prompts to choose `Novita AI` as the provider.
4. Enter your API key when prompted.
5. Select the Novita AI model of your choice.
</TabItem>
</Tabs>
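Under the hood, both setup paths amount to storing `NOVITA_API_KEY` and pointing an OpenAI-compatible client at Novita's endpoint. A sketch of the request payload such a client would assemble, assuming the standard OpenAI chat-completions shape; nothing is sent on the wire, and the model name is just one entry from the list above:

```python
import json
import os

# Assemble the chat-completions request an OpenAI-compatible client
# would send to Novita AI. Nothing is sent on the wire here.
base_url = "https://api.novita.ai/openai/chat/completions"  # from novita.json
api_key = os.environ.get("NOVITA_API_KEY", "sk-placeholder")

headers = {
    "Authorization": f"Bearer {api_key}",
    "Content-Type": "application/json",
}
payload = {
    "model": "deepseek/deepseek-v3.2",  # any model from the list above
    "messages": [{"role": "user", "content": "Hello from goose!"}],
    "stream": True,  # novita.json declares supports_streaming
}

print(json.dumps(payload, indent=2))
```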

### Google Gemini
Google Gemini provides a free tier. To start using the Gemini API with goose, you need an API Key from [Google AI studio](https://aistudio.google.com/app/apikey).
