🐛 fix(updater): terminate sidecar processes before update to avoid file access errors #5325
Merged: Minh141120 merged 1 commit into release/v0.6.0 from fix/sidecar-process-cleanup-before-update on Jun 17, 2025
Conversation
Important

Looks good to me! 👍

Reviewed everything up to d60ded6 in 1 minute and 50 seconds.
- Reviewed 12 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 draft comment, shown below

1. src-tauri/src/core/setup.rs:292
   - Draft comment: The added call to clean_up() is placed after the if/else block, so it always runs, even when a sidecar process is found and killed. If this behavior is intentional (to ensure all stray processes are terminated), please add a clarifying comment. Also, consider adding a trailing semicolon for consistency.
   - Reason this comment was not posted: after close inspection the draft was judged likely wrong and/or not actionable (usefulness confidence 10% vs. a 50% threshold). clean_up() (L439-462) is a general cleanup that kills both llama-server and cortex-server processes; it already runs at the start of setup_sidecar() (L214), so calling it again after handling the kill event, in both the success and failure cases, is a reasonable safety measure. The main thrust of the draft asks the author to explain their intentions, which the review rules disallow, and the missing-semicolon point is too minor to warrant a comment on its own.

Workflow ID: wflow_EN4lusU0gDZHSVfg
louis-menlo approved these changes on Jun 17, 2025
louis-menlo added a commit that referenced this pull request on Jun 20, 2025:
- chore: enable shortcut zoom (#5261)
  - chore: update shortcut setting
- fix: thinking block (#5263)
- Merge pull request #5262 from menloresearch/chore/sync-new-hub-data (chore: sync new hub data)
- ✨enhancement: model run improvement (#5268)
  - fix: mcp tool error handling
  - fix: error message
  - fix: trigger download from recommend model
  - fix: can't scroll hub
  - fix: show progress
  - ✨enhancement: prompt users to increase context size
  - ✨enhancement: rearrange action buttons for a better UX
  - 🔧chore: clean up logics
  - Co-authored-by: Faisal Amir <[email protected]>
- fix: glitch download from onboarding (#5269)
- ✨enhancement: Model sources should not be hard coded from frontend (#5270)
- 🐛fix: default onboarding model should use recommended quantizations (#5273)
  - ✨enhancement: show context shift option in provider settings
  - 🔧chore: wording
  - 🔧 config: add to gitignore
- 🐛fix: Jan-nano repo name changed (#5274)
  - 🚧 wip: disable showSpeedToken in ChatInput
  - 🐛 fix: commented out the wrong import
- fix: masking value MCP env field (#5276)
  - ✨ feat: add token speed to each message that persists
  - ♻️ refactor: to follow prettier convention
  - 🐛 fix: exclude deleted field
  - 🧹 clean: all the missed console.log
- ✨enhancement: out of context troubleshooting (#5275)
  - 🔧refactor: clean up
- ✨enhancement: add setting chat width container (#5289)
  - ✨enhancement: add setting conversation width
  - ✨enhancement: clean up log and improve accessibility
  - ✨enhancement: move const beta version
- 🐛fix: optional additional_information gpu (#5291)
- 🐛fix: showing release notes for beta and prod (#5292)
  - ♻️refactor: make a utils env
  - ♻️refactor: hide MCP for production
  - ♻️refactor: simplify the boolean expression fetching release notes
- 🐛fix: typo in build type check (#5297)
- 🐛fix: remove onboarding local model and hide the edit capabilities model (#5301)
  - ♻️refactor: conditional search params setup screen
- 🐛fix: hide token speed when assistant params stream false (#5302)
- 🐛fix: glitch padding speed token (#5307)
- 🐛fix: immediately show download progress (#5308)
- 🐛fix: safely convert values to numbers and handle NaN cases (#5309)
- chore: correct binary name for stable version (#5303) (#5311)
  - Co-authored-by: hiento09 <[email protected]>
- 🐛fix: llama.cpp default NGL setting does not offload all layers to GPU (#5310)
  - chore: cover more cases
  - chore: clean up
  - fix: should not show GPU section on Mac
- 🐛fix: update default extension settings (#5315)
  - chore: hide language setting on Prod
- 🐛fix: allow script posthog (#5316)
- Sync 0.5.18 to 0.6.0 (#5320)
  - chore: correct binary name for stable version (#5303)
  - ci: enable devtool on prod build (#5317)
  - Co-authored-by: hiento09 <[email protected]>
  - Co-authored-by: Nguyen Ngoc Minh <[email protected]>
- fix: glitch model download issue (#5322)
- 🐛 fix(updater): terminate sidecar processes before update to avoid file access errors (#5325)
- 🐛 fix: disable sorting for threads in SortableItem and clean up thread order handling (#5326)
- improved wording in UI elements (#5323)
- fix: sorted-thread-not-stable (#5336)
- 🐛fix: update wording desc vulkan (#5338)
  - ✨enhancement: update copy
- 🐛fix: handle NaN value tokenspeed (#5339)
- 🐛 fix: window path problem
- feat(server): filter /models endpoint to show only downloaded models (#5343)
  - Add filtering logic to proxy server for GET /models requests
  - Keep only models with status "downloaded" in response
  - Remove Content-Length header to prevent mismatch after filtering
  - Support both ListModelsResponseDto and direct array formats
  - Add comprehensive tests for filtering functionality
  - Fix Content-Length header conflict causing empty responses
  - Fixes issue where all models were returned regardless of download status
- 🐛fix: render streaming token speed based on thread ID & assistant metadata (#5346)
- fix(server): add gzip decompression support for /models endpoint filtering (#5349)
  - Add gzip detection using magic number check (0x1f 0x8b)
  - Implement gzip decompression before JSON parsing
  - Add gzip re-compression for filtered responses
  - Fix "invalid utf-8 sequence" error when upstream returns gzipped content
  - Maintain Content-Encoding consistency for compressed responses
  - Add comprehensive gzip handling with flate2 library
  - Resolves issue where filtering failed on gzip-compressed model responses
- fix(proxy): implement true HTTP streaming for chat completions API (#5350)
- fix: glitch toggle gpus (#5353)
  - fix: Using the GPU's array index as a key for gpuLoading
  - enhancement: added try-finally
- fix: built in models capabilities (#5354)
- 🐛fix: setting provider hide model capabilities (#5355)
  - 🐛fix: hide tools icon on dropdown model providers
  - fix: stop server on app close or reload
  - ✨enhancement: reset heading class
  - Co-authored-by: Louis <[email protected]>
- fix: stop api server on page unload (#5356)
  - fix: check api server status on reload
  - refactor: api server state
  - fix: should not pop the guard
- 🐛fix: avoid render html title thread (#5375)
- chore: minor bump - tokenjs for manual adding models
  - Co-authored-by: Louis <[email protected]>

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: LazyYuuki <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Sam Hoang Van <[email protected]>
Co-authored-by: Ramon Perez <[email protected]>
This pull request introduces a minor change to the src-tauri/src/core/setup.rs file. The change ensures that the clean_up() function is called when a kill event is received but no active sidecar process is found to kill.

Important

Calls clean_up() in setup.rs when a kill event is received but no active sidecar process is found, ensuring that lingering sidecar processes are terminated.

This description was created for d60ded6 and will automatically update as commits are pushed.
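The always-run cleanup pattern this PR describes can be sketched as follows. This is a minimal illustration, not the actual setup.rs code: the enum, the tracking via an `Option`, and the `swept` bookkeeping are assumptions made for the example; only the clean_up() name and the targeted process names come from the PR discussion.

```rust
// Minimal sketch of the kill-event handling described above. The real logic
// lives in src-tauri/src/core/setup.rs; these helpers are hypothetical.

#[derive(Debug, PartialEq)]
enum KillOutcome {
    KilledTracked,  // an active sidecar handle existed and was killed
    NothingTracked, // no handle was stored; only the sweep runs
}

// Stand-in for the real clean_up(): sweep stray sidecar processes by name.
// The PR discussion mentions llama-server and cortex-server as the targets.
fn clean_up(swept: &mut Vec<&'static str>) {
    for name in ["llama-server", "cortex-server"] {
        swept.push(name); // real code would enumerate PIDs and kill matches
    }
}

// Handle a kill event: kill the tracked child if any, then ALWAYS call
// clean_up(), so lingering processes cannot hold files open mid-update.
fn handle_kill_event(tracked: Option<u32>, swept: &mut Vec<&'static str>) -> KillOutcome {
    let outcome = match tracked {
        Some(_pid) => KillOutcome::KilledTracked, // real code: child.kill()
        None => KillOutcome::NothingTracked,
    };
    clean_up(swept); // runs after the if/else in both branches, per the PR
    outcome
}

fn main() {
    // The case the PR fixes: a kill event arrives with no tracked sidecar,
    // and the name-based sweep still terminates lingering processes.
    let mut swept = Vec::new();
    let outcome = handle_kill_event(None, &mut swept);
    println!("{:?} swept={:?}", outcome, swept);
}
```

The point of the fix is the unconditional clean_up() call after the branch: even when the tracked handle is missing or already dead, the sweep ensures no sidecar still holds files that the updater needs to replace.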