🐛fix: render streaming token speed based on thread ID & assistant metadata #5346
Conversation
Important
Looks good to me! 👍
Reviewed everything up to a0389fc in 53 seconds. Click for details.
- Reviewed 55 lines of code in 2 files
- Skipped 0 files when reviewing
- Skipped posting 3 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. `web-app/src/containers/ThreadContent.tsx:366`
   - Draft comment: Good use of adding a thread ID check before applying 'hidden'. Ensure that both `streamingContent` and `item.thread_id` are always defined and of the same type. (A minimal sketch of this check appears after this list.)
   - Reason this comment was not posted: Confidence changes required (0%) <= threshold (50%).
2. `web-app/src/containers/ThreadContent.tsx:445`
   - Draft comment: The `TokenSpeedIndicator` now correctly checks the `thread_id`. This ensures the indicator is shown only for the correct message stream.
   - Reason this comment was not posted: Confidence changes required (0%) <= threshold (50%).
3. `web-app/src/hooks/useMessages.ts:37`
   - Draft comment: Refining the assistant metadata structure improves clarity. Consider verifying that the default empty strings meet downstream type expectations.
   - Reason this comment was not posted: Confidence changes required (20%) <= threshold (50%).
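For readers without the diff open, here is a minimal sketch of the thread-ID checks described in the first two draft comments above. It is an illustration only, not the code in `ThreadContent.tsx`: the component structure, the `StreamingContent` and `ThreadMessage` shapes, and the placeholder `TokenSpeedIndicator` are assumptions.

```tsx
// Hypothetical sketch, not the actual ThreadContent.tsx implementation.
// `streamingContent` is the in-flight stream (if any); `item` is the message being rendered.
import React from 'react'

type StreamingContent = { thread_id: string }
type ThreadMessage = { thread_id: string; content: string }

// Placeholder stand-in for the real TokenSpeedIndicator component.
const TokenSpeedIndicator = ({ streaming }: { streaming: boolean }) =>
  streaming ? <span>tokens/s…</span> : null

function ExampleMessage({
  item,
  streamingContent,
}: {
  item: ThreadMessage
  streamingContent?: StreamingContent
}) {
  // Both checks now require the active stream to belong to this message's thread,
  // rather than reacting to a stream running in any thread.
  const streamsThisThread =
    !!streamingContent && streamingContent.thread_id === item.thread_id

  return (
    <>
      {/* The 'hidden' class is applied only when the thread IDs match. */}
      <div className={streamsThisThread ? 'hidden' : undefined}>{item.content}</div>
      {/* The streaming prop likewise includes the thread-ID match. */}
      <TokenSpeedIndicator streaming={streamsThisThread} />
    </>
  )
}
```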
Workflow ID: wflow_tZvqUH8LyJWe3aSv
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
LGTM!!!
Important
Looks good to me! 👍
Reviewed 82fc6d1 in 1 minute and 27 seconds. Click for details.
- Reviewed 12 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment. View it below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. `web-app/src/hooks/useMessages.ts:42`
   - Draft comment: Ensure the default value for `parameters` is of the intended type. If `parameters` is expected to be an object or array, consider using `{}` or `[]` instead of an empty string.
   - Reason this comment was not posted: Decided after close inspection that this draft comment was likely wrong and/or not actionable (usefulness confidence = 10% vs. threshold = 50%). The code shows a clear pattern of using empty strings as defaults for all assistant properties. Without seeing the type definition of `currentAssistant.parameters` or how this data is used elsewhere, suggesting a change to `{}` could break that consistency or introduce type mismatches; the current approach seems intentionally consistent. The `parameters` field may genuinely be meant to be an object, in which case this would be a real bug, and the type definitions would make that clear. Absent strong evidence from types or usage, however, changing the consistent pattern could introduce problems rather than solve them. The comment was dropped because it makes assumptions about types without clear evidence, while the code shows intentional consistency in using empty strings as defaults.
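To make the type concern concrete, here is a small sketch (not the repository's actual `useMessages.ts` code) of assistant metadata built with empty-string defaults, including `parameters`, plus a consumer that must guard against the non-object default. The `AssistantMetadata` shape and the `currentAssistant` input are assumptions.

```ts
// Hypothetical sketch of the defaulting pattern discussed above; all names are assumed.
interface AssistantMetadata {
  id: string
  name: string
  avatar: string
  instructions: string
  parameters: Record<string, unknown> | string // object values vs. the '' default
}

function toAssistantMetadata(currentAssistant?: Partial<AssistantMetadata>): AssistantMetadata {
  // Empty strings keep the defaults consistent across every field,
  // but anything reading `parameters` has to tolerate the non-object value.
  return {
    id: currentAssistant?.id ?? '',
    name: currentAssistant?.name ?? '',
    avatar: currentAssistant?.avatar ?? '',
    instructions: currentAssistant?.instructions ?? '',
    parameters: currentAssistant?.parameters ?? '',
  }
}

// A downstream consumer that expects an object must guard against the '' default.
function parameterCount(meta: AssistantMetadata): number {
  return typeof meta.parameters === 'string' ? 0 : Object.keys(meta.parameters).length
}
```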
Workflow ID: wflow_LrtneZNgSSlF2qpQ
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
* chore: enable shortcut zoom (#5261)
* chore: enable shortcut zoom
* chore: update shortcut setting
* fix: thinking block (#5263)
* Merge pull request #5262 from menloresearch/chore/sync-new-hub-data chore: sync new hub data
* ✨enhancement: model run improvement (#5268)
* fix: mcp tool error handling
* fix: error message
* fix: trigger download from recommend model
* fix: can't scroll hub
* fix: show progress
* ✨enhancement: prompt users to increase context size
* ✨enhancement: rearrange action buttons for a better UX
* 🔧chore: clean up logics
  Co-authored-by: Faisal Amir <[email protected]>
* fix: glitch download from onboarding (#5269)
* ✨enhancement: Model sources should not be hard coded from frontend (#5270)
* 🐛fix: default onboarding model should use recommended quantizations (#5273)
* 🐛fix: default onboarding model should use recommended quantizations
* ✨enhancement: show context shift option in provider settings
* 🔧chore: wording
* 🔧 config: add to gitignore
* 🐛fix: Jan-nano repo name changed (#5274)
* 🚧 wip: disable showSpeedToken in ChatInput
* 🐛 fix: commented out the wrong import
* fix: masking value MCP env field (#5276)
* ✨ feat: add token speed to each message that persist
* ♻️ refactor: to follow prettier convention
* 🐛 fix: exclude deleted field
* 🧹 clean: all the missed console.log
* ✨enhancement: out of context troubleshooting (#5275)
* ✨enhancement: out of context troubleshooting
* 🔧refactor: clean up
* ✨enhancement: add setting chat width container (#5289)
* ✨enhancement: add setting conversation width
* ✨enahncement: cleanup log and change improve accesibility
* ✨enahcement: move const beta version
* 🐛fix: optional additional_information gpu (#5291)
* 🐛fix: showing release notes for beta and prod (#5292)
* 🐛fix: showing release notes for beta and prod
* ♻️refactor: make an utils env
* ♻️refactor: hide MCP for production
* ♻️refactor: simplify the boolean expression fetch release note
* 🐛fix: typo in build type check (#5297)
* 🐛fix: remove onboarding local model and hide the edit capabilities model (#5301)
* 🐛fix: remove onboarding local model and hide the edit capabilities model
* ♻️refactor: conditional search params setup screen
* 🐛fix: hide token speed when assistant params stream false (#5302)
* 🐛fix: glitch padding speed token (#5307)
* 🐛fix: immediately show download progress (#5308)
* 🐛fix: safely convert values to numbers and handle NaN cases (#5309)
* chore: correct binary name for stable version (#5303) (#5311)
  Co-authored-by: hiento09 <[email protected]>
* 🐛fix: llama.cpp default NGL setting does not offload all layers to GPU (#5310)
* 🐛fix: llama.cpp default NGL setting does not offload all layers to GPU
* chore: cover more cases
* chore: clean up
* fix: should not show GPU section on Mac
* 🐛fix: update default extension settings (#5315)
* fix: update default extension settings
* chore: hide language setting on Prod
* 🐛fix: allow script posthog (#5316)
* Sync 0.5.18 to 0.6.0 (#5320)
* chore: correct binary name for stable version (#5303)
* ci: enable devtool on prod build (#5317)
* ci: enable devtool on prod build
  Co-authored-by: hiento09 <[email protected]>
  Co-authored-by: Nguyen Ngoc Minh <[email protected]>
* fix: glitch model download issue (#5322)
* 🐛 fix(updater): terminate sidecar processes before update to avoid file access errors (#5325)
* 🐛 fix: disable sorting for threads in SortableItem and clean up thread order handling (#5326)
* improved wording in UI elements (#5323)
* fix: sorted-thread-not-stable (#5336)
* 🐛fix: update wording desc vulkan (#5338)
* 🐛fix: update wording desc vulkan
* ✨enhancement: update copy
* 🐛fix: handle NaN value tokenspeed (#5339)
* 🐛 fix: window path problem
* feat(server): filter /models endpoint to show only downloaded models (#5343)
  - Add filtering logic to proxy server for GET /models requests
  - Keep only models with status "downloaded" in response
  - Remove Content-Length header to prevent mismatch after filtering
  - Support both ListModelsResponseDto and direct array formats
  - Add comprehensive tests for filtering functionality
  - Fix Content-Length header conflict causing empty responses
  Fixes issue where all models were returned regardless of download status.
* 🐛fix: render streaming token speed based on thread ID & assistant metadata (#5346)
* fix(server): add gzip decompression support for /models endpoint filtering (#5349)
  - Add gzip detection using magic number check (0x1f 0x8b)
  - Implement gzip decompression before JSON parsing
  - Add gzip re-compression for filtered responses
  - Fix "invalid utf-8 sequence" error when upstream returns gzipped content
  - Maintain Content-Encoding consistency for compressed responses
  - Add comprehensive gzip handling with flate2 library
  Resolves issue where filtering failed on gzip-compressed model responses.
* fix(proxy): implement true HTTP streaming for chat completions API (#5350)
* fix: glitch toggle gpus (#5353)
* fix: glitch toogle gpu
* fix: Using the GPU's array index as a key for gpuLoading
* enhancement: added try-finally
* fix: built in models capabilities (#5354)
* 🐛fix: setting provider hide model capabilities (#5355)
* 🐛fix: setting provider hide model capabilities
* 🐛fix: hide tools icon on dropdown model providers
* fix: stop server on app close or reload
* ✨enhancement: reset heading class
  Co-authored-by: Louis <[email protected]>
* fix: stop api server on page unload (#5356)
* fix: stop api server on page unload
* fix: check api server status on reload
* refactor: api server state
* fix: should not pop the guard
* 🐛fix: avoid render html title thread (#5375)
* 🐛fix: avoid render html title thread
* chore: minor bump - tokenjs for manual adding models
  Co-authored-by: Louis <[email protected]>

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: LazyYuuki <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Sam Hoang Van <[email protected]>
Co-authored-by: Ramon Perez <[email protected]>
Describe Your Changes
This pull request introduces updates to improve conditional rendering and state management in the `ThreadContent` component and refines the structure of the `assistant` metadata in the `useMessages` hook. The changes enhance functionality and ensure more precise handling of data.

Updates to the `ThreadContent` component:
- The `hidden` class is applied only when `streamingContent.thread_id` matches `item.thread_id`, enhancing accuracy in UI behavior. (`web-app/src/containers/ThreadContent.tsx`, L366-R369)
- Updated the `streaming` prop of the `TokenSpeedIndicator` component to include a thread ID match check, ensuring the streaming indicator appears only for the correct thread. (`web-app/src/containers/ThreadContent.tsx`, L442-R457)

Refinements in the `useMessages` hook:
- Refined the `assistant` metadata structure by adding explicit fields (`id`, `name`, `avatar`, and `instructions`) with default values, improving data integrity and clarity. (`web-app/src/hooks/useMessages.ts`, L37-R42)

Fixes Issues
Self Checklist
Important
Fixes rendering logic in `ThreadContent.tsx` and refines `assistant` metadata in `useMessages.ts` for accurate UI behavior and data integrity.
- `ThreadContent.tsx`: Improved conditional rendering to apply the `hidden` class only when `streamingContent.thread_id` matches `item.thread_id`. The `TokenSpeedIndicator` `streaming` prop was updated to check for a thread ID match.
- `useMessages.ts`: Expanded `assistant` metadata with the fields `id`, `name`, `avatar`, `instructions`, and `parameters`, each with a default value.

This description was created by Ellipsis for 82fc6d1. You can customize this summary. It will automatically update as commits are pushed.