
Add lmstudio declarative provider and fix up a few rough edges#7454

Closed
zanesq wants to merge 3 commits into main from zane/lmstudio

Conversation


zanesq (Contributor) commented Feb 23, 2026

Summary

Added LM Studio as a built-in provider for Goose so it's easier to set up. Also fixed several issues
that made the local-model experience poor.

PROBLEMS:

  • No built-in LM Studio support — users had to manually create a custom provider or use the generic OpenAI-compatible one
  • A recently merged PR (feat: add Moonshot and Kimi Code declarative providers #7304) introduced malformed JSON configs for Kimi/Moonshot
    that silently broke ALL declarative provider loading (Groq, DeepSeek, Cerebras, Mistral, Inception, OVHcloud all disappeared from the UI)
  • When local models returned errors in streaming responses (e.g. context length
    exceeded), users saw a cryptic "Failed to parse streaming chunk: missing field
    `choices`" instead of the actual error message from the server
  • Local models that use inline `<think>` tags for chain-of-thought reasoning leaked
    raw tags and thinking content into both chat messages and session names
  • The "Show reasoning" UI section had minimal styling compared to "Show thinking"
  • New chat sessions that failed to load got stuck — clicking "Start New Chat"
    kept reusing the broken session instead of creating a fresh one

FIXES:

  • Added LM Studio declarative provider config (OpenAI-compatible, localhost:1234,
    no auth, dynamic model fetching)
  • Fixed kimi.json and moonshot.json to use the correct schema, restoring all
    declarative providers
  • Made declarative provider loading resilient — one bad JSON file no longer breaks
    all providers
  • Improved OpenAI streaming parser to extract actual server error messages from
    non-standard error responses embedded in streams
  • Added `<think>` tag detection in the streaming parser that buffers thinking
    content and emits it as proper Reasoning content (shown in a collapsible UI
    section), while only the actual response is shown as text
  • Fixed session naming to strip inline thinking content so sessions get meaningful
    names instead of "1. Analyze the Request:..."
  • Styled the "Show reasoning" collapsible section with rounded background and
    border matching the existing "Show thinking" section
  • Fixed "Start New Chat" to skip errored sessions instead of reusing them, by
    tracking session error state via existing status update events
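
For illustration, a declarative config along the lines summarized above (OpenAI-compatible, localhost:1234, no auth, dynamic model fetching) might look like this — the field names here are guesses, not Goose's actual declarative-provider schema:

```json
{
  "name": "lmstudio",
  "display_name": "LM Studio",
  "api_format": "openai",
  "base_url": "http://localhost:1234/v1",
  "requires_api_key": false,
  "fetch_models": true
}
```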
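
The resilience fix can be sketched as: parse each config independently and skip (with a log line) any file that fails, rather than propagating the first error and losing every provider. All names below are hypothetical, and the stand-in parser is a toy — the real code would deserialize JSON properly:

```rust
// Sketch of fault-tolerant declarative provider loading (hypothetical
// names; Goose's real code deserializes bundled JSON config files).
#[derive(Debug)]
struct Provider {
    name: String,
}

// Stand-in for a real JSON deserializer: accepts only configs containing
// a `"name"` field, just to illustrate the error path.
fn parse_provider(raw: &str) -> Result<Provider, String> {
    let key = "\"name\":";
    match raw.find(key) {
        Some(i) => {
            let rest = &raw[i + key.len()..];
            let start = rest.find('"').ok_or("malformed name")? + 1;
            let end = start + rest[start..].find('"').ok_or("unterminated name")?;
            Ok(Provider { name: rest[start..end].to_string() })
        }
        None => Err("missing required field `name`".to_string()),
    }
}

fn load_providers(raw_configs: &[&str]) -> Vec<Provider> {
    let mut providers = Vec::new();
    for (i, raw) in raw_configs.iter().enumerate() {
        // One bad file must not take down every other provider:
        // log the failure and keep going instead of returning an error.
        match parse_provider(raw) {
            Ok(p) => providers.push(p),
            Err(e) => eprintln!("skipping provider config #{i}: {e}"),
        }
    }
    providers
}

fn main() {
    let configs = [
        r#"{"name": "groq", "base_url": "https://api.groq.com/openai/v1"}"#,
        r#"{"display": "kimi"}"#, // malformed: no `name` field
        r#"{"name": "lmstudio", "base_url": "http://localhost:1234/v1"}"#,
    ];
    let loaded = load_providers(&configs);
    // The malformed Kimi-style entry is skipped; the other two survive.
    assert_eq!(loaded.len(), 2);
    println!("loaded: {:?}", loaded.iter().map(|p| &p.name).collect::<Vec<_>>());
}
```

The key design choice is the `match` inside the loop: an `Err` becomes a log line and a `continue` instead of an early return, which is exactly the difference between one broken file and all providers disappearing from the UI.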
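
The improved streaming-error handling might look roughly like this: when a chunk lacks the standard `choices` field, scan it for an embedded error message before failing with a parse error. This is a naive sketch with a hypothetical helper name — real code would use a proper JSON parser such as serde:

```rust
// Naive sketch: surface a server error embedded in a streamed chunk that
// lacks the standard `choices` field (hypothetical helper, not Goose's code).
fn extract_stream_error(chunk: &str) -> Option<String> {
    // A well-formed delta has `choices`; only sniff for an error when it
    // doesn't, instead of reporting "missing field `choices`".
    if chunk.contains("\"choices\"") {
        return None;
    }
    // Pull out the first `"message": "..."` string, e.g. from
    // {"error": {"message": "context length exceeded", "code": 400}}.
    let key = "\"message\":";
    let i = chunk.find(key)?;
    let rest = &chunk[i + key.len()..];
    let start = rest.find('"')? + 1;
    let end = start + rest[start..].find('"')?;
    Some(rest[start..end].to_string())
}

fn main() {
    let bad = r#"{"error":{"message":"context length exceeded","code":400}}"#;
    assert_eq!(extract_stream_error(bad).as_deref(), Some("context length exceeded"));

    let ok = r#"{"choices":[{"delta":{"content":"hi"}}]}"#;
    assert_eq!(extract_stream_error(ok), None);

    println!("error surfaced: {:?}", extract_stream_error(bad));
}
```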

Visually differentiate the thinking message in chat, and fix it so inline thinking is pulled out:

Before: [screenshot]
After: [screenshot, 2026-02-23 3:21 PM]

Before the fix, the thinking message was leaking into the autogenerated session title like this: [screenshot, 2026-02-23 2:23 PM]
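
The inline-thinking extraction above could be implemented as a small streaming state machine that routes text between `<think>…</think>` into a separate reasoning channel. This is a sketch with hypothetical types, assuming tag markers are never split across two deltas — not the PR's actual parser:

```rust
// Sketch of <think>-tag handling for streamed text deltas.
#[derive(Debug, PartialEq)]
enum Chunk {
    Reasoning(String), // goes to the collapsible "Show reasoning" section
    Text(String),      // goes to the visible chat message
}

struct ThinkFilter {
    in_think: bool,
    buf: String, // accumulates reasoning text until </think> arrives
}

impl ThinkFilter {
    fn new() -> Self {
        Self { in_think: false, buf: String::new() }
    }

    // Feed one streamed delta; returns chunks classified as reasoning or
    // text. Simplifying assumption: a tag marker arrives whole in one delta.
    fn push(&mut self, delta: &str) -> Vec<Chunk> {
        let mut out = Vec::new();
        let mut rest = delta;
        while !rest.is_empty() {
            if self.in_think {
                match rest.find("</think>") {
                    Some(i) => {
                        self.buf.push_str(&rest[..i]);
                        out.push(Chunk::Reasoning(std::mem::take(&mut self.buf)));
                        self.in_think = false;
                        rest = &rest[i + "</think>".len()..];
                    }
                    None => {
                        // Still inside the thinking block: keep buffering.
                        self.buf.push_str(rest);
                        rest = "";
                    }
                }
            } else {
                match rest.find("<think>") {
                    Some(i) => {
                        if i > 0 {
                            out.push(Chunk::Text(rest[..i].to_string()));
                        }
                        self.in_think = true;
                        rest = &rest[i + "<think>".len()..];
                    }
                    None => {
                        out.push(Chunk::Text(rest.to_string()));
                        rest = "";
                    }
                }
            }
        }
        out
    }
}

fn main() {
    let mut f = ThinkFilter::new();
    let mut chunks = Vec::new();
    for delta in ["<think>Let me ", "reason.</think>", "Answer: 42"] {
        chunks.extend(f.push(delta));
    }
    // Thinking content is buffered across deltas and emitted as Reasoning;
    // only the real answer is emitted as Text (and is safe for titles).
    assert_eq!(
        chunks,
        vec![
            Chunk::Reasoning("Let me reason.".to_string()),
            Chunk::Text("Answer: 42".to_string()),
        ]
    );
    println!("{chunks:?}");
}
```

Session naming then only has to consider the `Text` chunks, which is what keeps thinking content out of the autogenerated title shown in the screenshot above.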


zanesq commented Feb 24, 2026

Closing for more investigation / testing, will put up a separate PR for just the declarative provider changes

zanesq closed this Feb 24, 2026
