
Conversation

@cbruyndoncx (Contributor) commented Sep 8, 2025

Pull Request Description

Description

  • This PR completes end-to-end support for custom providers and significantly improves the provider management experience in the desktop UI. It wires custom provider config through core config → provider registry → API client, expands the provider edit modal to expose more fields (base URL, headers/secrets, model selection, etc.), and refactors the Providers grid for a better UX consistent with the extensions grid.

Key changes

  • Backend / core

    • Add/extend custom provider configuration handling: crates/goose/src/config/custom_providers.rs (see the config sketch after this list)
    • Make the provider registry and API client accept dynamic base URLs, headers, and secret injection: crates/goose/src/providers/provider_registry.rs, crates/goose/src/providers/api_client.rs
    • Adjust formats/OpenAI handling to be compatible with custom endpoints: crates/goose/src/providers/formats/openai.rs, crates/goose/src/providers/openai.rs
    • Expose provider config management to the UI and fix small reply route issues: crates/goose-server/src/routes/config_management.rs, crates/goose-server/src/routes/reply.rs
  • UI (Electron / React)

    • Reworked Providers grid for improved layout/controls: ui/desktop/src/components/settings/providers/ProviderGrid.tsx
    • Redesigned provider configuration modal to support default/custom flows and surface additional fields:
      • ui/desktop/src/components/settings/providers/modal/ProviderConfiguationModal.tsx
      • ui/desktop/src/components/settings/providers/modal/subcomponents/forms/CustomProviderForm.tsx
      • ui/desktop/src/components/settings/providers/modal/subcomponents/forms/DefaultProviderSetupForm.tsx
      • ui/desktop/src/components/settings/providers/modal/subcomponents/handlers/DefaultSubmitHandler.tsx
    • Added small helper: ui/desktop/src/utils/secretConstants.ts
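To make the shape of this configuration concrete, here is a minimal sketch of what a custom provider config entry could look like once deserialized in Rust. It is illustrative only (assuming serde is available): the field names and types are guesses made for this description, not the actual schema in custom_providers.rs.

```rust
use serde::Deserialize;
use std::collections::HashMap;

/// Illustrative sketch only: field names and types are assumptions, not the
/// actual schema in crates/goose/src/config/custom_providers.rs.
#[derive(Debug, Deserialize)]
struct CustomProviderConfig {
    /// Identifier used to look the provider up in the provider registry.
    name: String,
    /// Display name shown in the desktop Providers grid.
    display_name: String,
    /// Base URL of the OpenAI-compatible endpoint.
    base_url: String,
    /// Name of the secret (e.g. the API key) injected as an auth header.
    api_key_name: Option<String>,
    /// Extra headers sent with every request.
    #[serde(default)]
    headers: HashMap<String, String>,
    /// Models the provider exposes; the UI may also fetch these dynamically.
    #[serde(default)]
    models: Vec<String>,
}
```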

Motivation

  • Custom/dynamic providers were only partially supported. Users need to be able to add, edit, and use custom providers from the desktop UI (including custom endpoints, headers, and models). The UI previously exposed a limited set of provider fields and the providers UX did not match the extensions grid. This PR finishes the integration and brings the UX into parity with extensions.

User-visible changes

  • Users can:
    • Add and configure custom providers from Settings → Providers (set base URL, auth header/secret, models).
    • Edit default providers with more exposed configuration fields.
    • Interact with an improved Providers grid that aligns with the extensions experience.

Tests completed:

  • Create new custom providers (OpenAI / Ollama; no Anthropic tests done)
  • Use a custom provider (with openai/gpt-oss via OpenRouter and LM Studio)
  • Edit (custom) provider settings and verify the JSON PUT request contains all details
  • Verify API key changes work (values of true are left untouched, real API key changes are applied)
  • Verify the model list is fetched for custom providers too

Known issues:

  • CLI changes are not tackled here; the CLI shows the wrong ENV vars and will be addressed next.
  • A lot of logging was added while trying to fix OpenAI-compatible issues during testing with hyperbolic.xyz. Not all OpenAI options are supported by Hyperbolic; a comment in the source notes which one to remove, but this needs a broader design discussion. The logging does not hurt, but it is best removed once there is a better way to surface issues with (custom) providers.

Screenshots

Provider grid


Edit custom provider configuration

Exposing all fields from the json custom providers config

Edit default providers configuration

Exposing all environment vars fields in the UI

… improving the providers edit box exposing more fields, and the grid for UX experience, similar to extensions

Signed-off-by: Carine Bruyndoncx <bruyndoncx@gmail.com>
@cbruyndoncx (Contributor, Author):

@angiejones Providers looks a bit like Extensions now ...
Should it also have the gear button while it is not yet configured, plus a button to actually "edit the provider config"?

@cbruyndoncx (Contributor, Author):

@zanesq There are two ways the label is generated; I have not touched that. It is possible to see when a field is an environment variable, but the label removes the underscore (the placeholder has a space, I think), so it is not truly obvious which environment variable it should be.
I am under the impression two people approached it differently ...

@taniandjerry (Contributor):

@alexhancock replied on Discord, but commenting here that he will take a look today!

@cbruyndoncx (Contributor, Author):

@alexhancock gentle reminder

@taniandjerry (Contributor):

Let me tag other team members across goose dev as well, since there have been recent issues and items to review! @DOsinga @michaelneale @zanesq @jamadeo

@jamadeo (Collaborator) left a comment:

I really like the direction and the attention to custom provider config -- thank you @cbruyndoncx! I think we could get this merged more quickly if you are able to break up the changes, though. Maybe one PR for the format/streaming changes and one for the configuration changes, at least?

Also, for the format changes: can you share some context for which providers need this? It would be good to test them out or at least be generally aware of the variations on openai's format we see in the wild.

}
}
}

Collaborator:

This was slightly confusing to me -- where do we have a secret with a boolean as the value, and why should it be skipped?

Collaborator:

I think this is because our API returns true when getting a secret value, or false if there is no value. However, I don't think we should block setting secret values to true - it is not much of a value to keep secret. We should just not do this write. Is there a particular case where this is happening?

Contributor (Author):

In the Windows UI the true value seemed to cause keyring issues. I tried a detailed split, but that failed. I will retry with a subset to get the custom provider bugs fixed first.
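For illustration, the "just not do this write" suggestion above could amount to something like the following sketch. The helper name and the `"true"` mask literal are assumptions drawn from this thread (the secrets API reporting true/false instead of the value), not goose's actual code.

```rust
/// Hypothetical helper: the secrets API reports an existing secret as the
/// mask value `true` (and `false` when unset), so a round-tripped form
/// submission containing only that mask should not be written back to the
/// keyring.
fn should_write_secret(submitted: &str) -> bool {
    !submitted.is_empty() && submitted != "true"
}

// Example: should_write_secret("true") == false  (mask, skip the write)
//          should_write_secret("sk-real-key") == true  (real change, write it)
```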

.await;
// also send a visible assistant message so the UI shows it inline
let assistant_msg = Message::assistant().with_text(format!("Provider error: {}", err_text));
let _ = stream_event(MessageEvent::Message { message: assistant_msg }, &task_tx, &cancel_token).await;
Collaborator:

If we want to show the error (I thought we already did?), I think it would be better to just render the Error event instead of streaming it as a Message.

Collaborator:

when does this happen?

// include the error text so renderers can style it as an error.
let msg_text = format!("LLM streaming error encountered. See details below:\n{}", details);

let message = Message::assistant().with_content(MessageContent::text(msg_text));
Collaborator:

If we do need to show errors while keeping the stream alive (and I can see how that makes sense), we should instead have a non-terminating error type so we can return these without putting them in a Message.
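For context, a very rough sketch of the distinction being proposed, with entirely hypothetical names (these are not goose's real event types): a stream event enum that separates terminating errors from non-terminating ones, so errors can be surfaced without being wrapped in an assistant Message.

```rust
// Hypothetical sketch only -- names do not correspond to goose's real types.
// The point is the split between a terminating Error and a non-terminating
// Notification that keeps the stream alive.
enum StreamEvent {
    /// Normal assistant output.
    Message { text: String },
    /// Fatal provider error; the stream ends after this event.
    Error { details: String },
    /// Non-fatal provider error; surfaced to the UI, stream continues.
    Notification { details: String },
}

fn is_terminal(event: &StreamEvent) -> bool {
    matches!(event, StreamEvent::Error { .. })
}
```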

console.debug('API client get failed, falling back to fetch', e);
}

// Fallback to a direct fetch
Collaborator:

Why the fallback? Is it because the client wasn't initialized with the right endpoint? I think that should be working better now as of #4338, and we shouldn't need this.

path: { id: currentProvider.name },
headers: { 'X-Secret-Key': secretKey },
});
const body =
Collaborator:

this should use the requests in sdk.gen.ts

@jamadeo requested a review from DOsinga on September 12, 2025 at 14:40
@DOsinga (Collaborator) left a comment:

I like this a lot; it makes things right that weren't right the first time.

I left some comments, but there are two things that stand out. I don't know why the custom providers should interact with their own custom settings; that seems confusing.

Also, can you look at the client code? It looks a little vibe-coded.

(status = 500, description = "Internal server error")
)
)]
pub async fn update_custom_provider(
Collaborator:

can you remove the over eager LLM comments here?

axum::extract::Path(id): axum::extract::Path<String>,
Json(request): Json<UpdateCustomProviderRequest>,
) -> Result<Json<String>, StatusCode> {
verify_secret_key(&headers, &state)?;
Collaborator:

we got rid of this, but presumably syncing to main will tell you


.filter(|s| s.starts_with("LLM streaming error encountered"))
{
// Send a short error string (avoid flooding the SSE with huge payloads)
let short = if first_text.len() > 1024 {
Collaborator:

Don't use .len or [..] for text, since that breaks CJK. We have a function for this: safe_truncate.
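For reference, the general idea behind char-boundary-safe truncation looks roughly like the sketch below. This is not goose's safe_truncate (its signature is not shown in this thread); it only illustrates why byte slicing with [..] can panic on multi-byte characters such as CJK text.

```rust
// Sketch of char-boundary-safe truncation; goose's own safe_truncate may
// differ in signature and behavior.
fn truncate_on_char_boundary(s: &str, max_bytes: usize) -> &str {
    if s.len() <= max_bytes {
        return s;
    }
    // Walk back from max_bytes until we land on a valid UTF-8 char boundary,
    // so multi-byte characters are never split (which would panic).
    let mut end = max_bytes;
    while !s.is_char_boundary(end) {
        end -= 1;
    }
    &s[..end]
}
```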

!configValues[parameter.name]
) {
newValues[parameter.name] = String(parameter.default);
if (parameter.default !== undefined && parameter.default !== null) {
Collaborator:

what is the scenario where this is the right thing to do?

if check_context_length_exceeded(&payload_str) {
ProviderError::ContextLengthExceeded(payload_str)
} else {
// Try multiple ways to extract a useful message
Collaborator:

Let's not use "Unknown error" as a way to flag whether we have already handled this case (for one thing, if the error is really unknown, we might now return the payload_str). But also, what are we trying to cover here? Which providers have these specific behaviors we're trying to rescue from?
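As a concrete illustration of "try multiple ways to extract a useful message", a sketch like the following is one way to probe common OpenAI-style error payload shapes with serde_json. The specific JSON paths are assumptions, and as the review notes, each branch should ideally name the provider that requires it.

```rust
use serde_json::Value;

// Sketch: try a few common locations for a human-readable error message in an
// OpenAI-style error payload. The shapes checked here are assumptions.
fn extract_error_message(payload: &Value) -> Option<String> {
    payload
        .pointer("/error/message")               // { "error": { "message": "..." } }
        .or_else(|| payload.pointer("/message")) // { "message": "..." }
        .or_else(|| payload.pointer("/error"))   // { "error": "..." }
        .and_then(Value::as_str)
        .map(str::to_string)
}
```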

// Some providers put a 'name' (tool name) here instead of in tool_calls
name: Option<String>,
// OpenAI/variants may include a 'refusal' field with refusal details
refusal: Option<Value>,
Collaborator:

Here and elsewhere, we talk about "some providers". I think we need to explicitly mention which ones we are talking about. In an ideal world we would just have a dialect flag, but I can see how that is hard from a custom provider perspective.
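Purely as a sketch of the "dialect flag" idea (names and variants are invented for illustration, not part of goose): each custom provider could declare which OpenAI-format variation it speaks, so the format code can branch on a named dialect instead of commenting "some providers".

```rust
use serde::Deserialize;

// Invented for illustration; not goose's actual configuration schema.
#[derive(Debug, Clone, Copy, Default, Deserialize)]
#[serde(rename_all = "snake_case")]
enum OpenAiDialect {
    /// Strict OpenAI behavior.
    #[default]
    OpenAi,
    /// Variants that put the tool name on the message rather than in tool_calls.
    ToolNameOnMessage,
    /// Variants that reject some optional request options (e.g. hyperbolic.xyz).
    ReducedOptions,
}
```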

// global config/keyring. Previously this used the constant
// "CUSTOM_PROVIDER_BASE_URL" for every provider which caused different
// providers to read/write the same key and mix up values stored in the
// keyring or config file.
Collaborator:

previously? the reader doesn't care how this previously worked

Contributor (Author):

Previously, if you had just a single custom provider it worked, but it failed with the second one.

Collaborator:

I understand the comment, but comments should help the reader understand the current code. Either way, custom providers should not use environment variables where the values are defined for the custom provider itself.

Contributor (Author):

Yes, I understand that now; I am trying to figure out when/where they are introduced. I feel like I am on a treasure hunt in the code ...
I thought environment variables were always allowed, but I now understand slightly better how this new design is supposed to work.

) -> Result<()> {
let configs = load_custom_providers(dir)?;

// Detect legacy shared key usage in keyring/config that could override
Collaborator:

I'm not sure I follow this. I don't think custom providers should support custom provider base URLs beyond what they specify in the JSON. Why do we look at the config here?

In general, I don't think custom providers should access the config at all; they are already custom.

Contributor (Author):

Understood now - things become clearer

@cbruyndoncx (Contributor, Author):

Thanks for the review. Yes, Goose vibed with me.
I noticed almost no comments in the goose source.

I have merged v1.8.0 locally and am creating detailed PRs for the different pieces.

@DOsinga said: "I don't know why the custom providers should interact with their own custom settings, that seems confusing."

I don't understand your remark.

@DOsinga (Collaborator) commented Sep 16, 2025:

Sorry for the slow reply.

What I mean is that a custom provider doesn't need an environment variable to override its base URL; it's a custom provider, it already has a custom base URL, etc. Does that make sense?

@cbruyndoncx (Contributor, Author):

@DOsinga I am closing this initial big PR in favor of #4781, which has only the changes necessary to get the OpenAI-compatible provider working, tested with OpenRouter.
A video is included in the new PR.
