feat(ui): add text streaming hook and update GooseMessage to stream responses #2898
Conversation
very nice @chaitanyarahalkar - didn't require changes to providers? cc @jamadeo - do you like this approach? looks good to me
one problem is this looks odd with tool results, let me attach a recording (using databricks + claude4): streaming2.mov
I think clock time needs to be no slower than now for this to appear believable - and hopefully we could have real streaming from the providers?
michaelneale
left a comment
I think for tool cases it shouldn't be slower than current wall-clock time - so it probably requires streaming support from the provider end first?
Yup, I can add this fix here, and in a separate PR add streaming from the backend (directly through the API) for models that support it. I think only OpenAI does it for now?
@Kvadratni ideally we need it natively from the LLM API; I will take a crack at doing that for OpenAI models. This is just doing it perceptively on the UI end.
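For illustration only (this is not code from the PR, and `visiblePrefix` and `charsPerMs` are hypothetical names): the "perceived streaming" pacing discussed above could be implemented so the animated reveal never falls behind the text that has actually arrived from the provider, addressing the wall-clock concern:

```typescript
// Hypothetical pacing helper: given the full text received so far and the
// elapsed milliseconds since streaming began, return the prefix that should
// currently be visible. The reveal rate is capped by charsPerMs, but a floor
// keeps the animation within a small window of what has actually arrived,
// so it can only speed up and never lag far behind wall-clock arrival.
export function visiblePrefix(
  received: string,
  elapsedMs: number,
  charsPerMs = 0.05, // assumed default: roughly 50 characters per second
): string {
  const paced = Math.floor(elapsedMs * charsPerMs);
  // Never trail more than ~80 characters behind the provider's output.
  const floor = Math.max(0, received.length - 80);
  const visible = Math.min(received.length, Math.max(paced, floor));
  return received.slice(0, visible);
}
```

A component could call this from a `requestAnimationFrame` loop, re-rendering with the returned prefix until it equals the full received text.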
yup, provider is not streaming for now |
This was my take on adding streaming: #2677, though I've let the PR get stale. I'll merge and fix some conflicts. I started with just Databricks because it's what I typically use, but most of the plumbing is there for any other provider too.
Nice, I'll try to build off the changes here and see if I can make it work for OpenAI's APIs.
sorry for taking so long - it should be fixed in main now! thanks! |

This PR brings real-time text streaming to the desktop UI:
- useTextStreaming.ts – a reusable React hook that opens a Server-Sent Events (SSE) connection, parses incremental tokens, and exposes a clean streaming API to components.
- GooseMessage.tsx – refactored to consume the new hook and render partial LLM responses as they arrive, giving users immediate feedback instead of waiting for the full completion.

Context & Motivation
Large responses that appear all at once feel sluggish and block the conversation. Streaming completions improve perceived performance and match the interactive feel of modern chat apps. This work lays the foundation for richer UX features like inline code execution progress, token-level highlighting, and cancel/resume.
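As a rough sketch of the token-parsing side such a hook has to handle (a hypothetical helper, not the PR's actual implementation; the `[DONE]` sentinel is an assumption borrowed from the common OpenAI-style convention), an SSE chunk parser might look like:

```typescript
// Hypothetical helper: extract the token strings carried by the `data:`
// lines of a raw Server-Sent Events chunk. A streaming hook would feed each
// network chunk through this and append the tokens to the visible message.
export function parseSSEChunk(chunk: string): string[] {
  const tokens: string[] = [];
  for (const line of chunk.split("\n")) {
    if (line.startsWith("data: ")) {
      const payload = line.slice("data: ".length);
      if (payload === "[DONE]") break; // assumed end-of-stream sentinel
      tokens.push(payload);
    }
  }
  return tokens;
}
```

In the browser, `EventSource` already does this framing for simple cases; a hand-rolled parser like this only becomes necessary when reading the stream via `fetch` and a `ReadableStream`.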
Implementation Details
GooseMessage now:
Testing
Demo
Before
Screen.Recording.2025-06-12.at.8.47.51.PM.mov
After
Screen.Recording.2025-06-12.at.9.09.18.PM.mov
🔗 Related Issues / PRs