
Conversation

@Minh141120 (Member) commented Jun 17, 2025

This pull request makes a small change to src-tauri/src/core/setup.rs: it ensures that clean_up() is called when a kill event is received but no active sidecar process is found to kill.


Important

Calls clean_up() in setup.rs when a kill event is received but no active sidecar process is found, ensuring termination of lingering sidecar processes.

  • Behavior:
    • Calls clean_up() in setup.rs when a kill event is received but no active sidecar process is found.
    • Ensures termination of lingering sidecar processes to prevent file access errors during updates.

This description was created by Ellipsis for d60ded6. You can customize this summary. It will automatically update as commits are pushed.
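
For orientation, here is a minimal sketch of the behavior described above. It is not the actual setup.rs code: the real handler uses Tauri's sidecar APIs and its own event names, while this sketch uses a plain std::process::Child and an assumed handler name.

```rust
use std::process::Child;
use std::sync::Mutex;

// Hypothetical handle to the spawned sidecar. The real code tracks a Tauri
// sidecar child; std::process::Child is used here only to keep the sketch
// self-contained.
static SIDECAR: Mutex<Option<Child>> = Mutex::new(None);

// Assumed entry point for the "kill sidecar" event.
fn on_kill_sidecar_event() {
    let mut guard = SIDECAR.lock().unwrap();
    if let Some(mut child) = guard.take() {
        // An active sidecar is tracked: kill it directly.
        let _ = child.kill();
    } else {
        // No tracked sidecar, but stray server processes may still be running
        // and holding files open during an update; this is the case the PR covers.
        clean_up();
    }
}

// Placeholder for the existing clean_up() in setup.rs, which terminates
// lingering llama-server / cortex-server processes.
fn clean_up() {}
```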

@ellipsis-dev bot (Contributor) left a comment


Important

Looks good to me! 👍

Reviewed everything up to d60ded6 in 1 minute and 50 seconds.
  • Reviewed 12 lines of code in 1 file
  • Skipped 0 files when reviewing.
  • Skipped posting 1 draft comment. View it below.
  • Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. src-tauri/src/core/setup.rs:292
  • Draft comment:
    The added call to clean_up() is placed after the if/else block, so it always runs—even when a sidecar process is found and killed. If this behavior is intentional (to ensure all stray processes are terminated), please add a clarifying comment. Also, consider adding a trailing semicolon for consistency.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable: usefulness confidence = 10% vs. threshold = 50%. Looking at the clean_up() function (L439-462), it's a general cleanup that kills both llama-server and cortex-server processes. The function is called at the start of setup_sidecar() (L214) and after handling the kill event, so running it in both the success and failure cases of killing the sidecar makes sense as a safety measure. The missing semicolon is a valid but minor style point, and asking whether the behavior is intentional violates our rules about not asking authors to explain their intentions. Delete the comment.
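
For context, a rough sketch of what such a cleanup might look like is below. It is an assumption, not the function from setup.rs: the real clean_up() may enumerate and terminate processes differently, while this sketch simply shells out to pkill / taskkill for the two server names mentioned above.

```rust
use std::process::Command;

// Illustrative only: terminate stray inference-server processes by name so
// nothing keeps files locked during an update. The process names and the use
// of pkill/taskkill are assumptions for this sketch.
fn clean_up() {
    for name in ["llama-server", "cortex-server"] {
        #[cfg(unix)]
        let _ = Command::new("pkill").args(["-f", name]).status();

        #[cfg(windows)]
        let _ = Command::new("taskkill")
            .arg("/F")
            .arg("/IM")
            .arg(format!("{name}.exe"))
            .status();
    }
}
```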

Workflow ID: wflow_EN4lusU0gDZHSVfg

You can customize Ellipsis by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.

@Minh141120 Minh141120 merged commit 3f07358 into release/v0.6.0 Jun 17, 2025
15 checks passed
@Minh141120 Minh141120 deleted the fix/sidecar-process-cleanup-before-update branch June 17, 2025 16:43
@github-project-automation github-project-automation bot moved this to QA in Jan Jun 17, 2025
@github-actions github-actions bot added this to the v0.5.19 milestone Jun 17, 2025
louis-menlo added a commit that referenced this pull request Jun 20, 2025
* chore: enable shortcut zoom (#5261)

* chore: enable shortcut zoom

* chore: update shortcut setting

* fix: thinking block (#5263)

* Merge pull request #5262 from menloresearch/chore/sync-new-hub-data

chore: sync new hub data

* ✨enhancement: model run improvement (#5268)

* fix: mcp tool error handling

* fix: error message

* fix: trigger download from recommend model

* fix: can't scroll hub

* fix: show progress

* ✨enhancement: prompt users to increase context size

* ✨enhancement: rearrange action buttons for a better UX

* 🔧chore: clean up logic

---------

Co-authored-by: Faisal Amir <[email protected]>

* fix: glitch download from onboarding (#5269)

* ✨enhancement: Model sources should not be hard coded from frontend (#5270)

* 🐛fix: default onboarding model should use recommended quantizations (#5273)

* 🐛fix: default onboarding model should use recommended quantizations

* ✨enhancement: show context shift option in provider settings

* 🔧chore: wording

* 🔧 config: add to gitignore

* 🐛fix: Jan-nano repo name changed (#5274)

* 🚧 wip: disable showSpeedToken in ChatInput

* 🐛 fix: commented out the wrong import

* fix: masking value MCP env field (#5276)

* ✨ feat: add token speed to each message that persist

* ♻️ refactor: to follow prettier convention

* 🐛 fix: exclude deleted field

* 🧹 clean: all the missed console.log

* ✨enhancement: out of context troubleshooting (#5275)

* ✨enhancement: out of context troubleshooting

* 🔧refactor: clean up

* ✨enhancement: add setting chat width container (#5289)

* ✨enhancement: add setting conversation width

* ✨enhancement: cleanup log and improve accessibility

* ✨enhancement: move const beta version

* 🐛fix: optional additional_information gpu (#5291)

* 🐛fix: showing release notes for beta and prod (#5292)

* 🐛fix: showing release notes for beta and prod

* ♻️refactor: make a utils env

* ♻️refactor: hide MCP for production

* ♻️refactor: simplify the boolean expression fetch release note

* 🐛fix: typo in build type check (#5297)

* 🐛fix: remove onboarding local model and hide the edit capabilities model (#5301)

* 🐛fix: remove onboarding local model and hide the edit capabilities model

* ♻️refactor: conditional search params setup screen

* 🐛fix: hide token speed when assistant params stream false (#5302)

* 🐛fix: glitch padding speed token (#5307)

* 🐛fix: immediately show download progress (#5308)

* 🐛fix: safely convert values to numbers and handle NaN cases (#5309)

* chore: correct binary name for stable version (#5303) (#5311)

Co-authored-by: hiento09 <[email protected]>

* 🐛fix: llama.cpp default NGL setting does not offload all layers to GPU (#5310)

* 🐛fix: llama.cpp default NGL setting does not offload all layers to GPU

* chore: cover more cases

* chore: clean up

* fix: should not show GPU section on Mac

* 🐛fix: update default extension settings (#5315)

* fix: update default extension settings

* chore: hide language setting on Prod

* 🐛fix: allow script posthog (#5316)

* Sync 0.5.18 to 0.6.0 (#5320)

* chore: correct binary name for stable version (#5303)

* ci: enable devtool on prod build (#5317)

* ci: enable devtool on prod build

---------

Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>

* fix: glitch model download issue (#5322)

* 🐛 fix(updater): terminate sidecar processes before update to avoid file access errors (#5325)

* 🐛 fix: disable sorting for threads in SortableItem and clean up thread order handling (#5326)

* improved wording in UI elements (#5323)

* fix: sorted-thread-not-stable (#5336)

* 🐛fix: update wording desc vulkan (#5338)

* 🐛fix: update wording desc vulkan

* ✨enhancement: update copy

* 🐛fix: handle NaN value tokenspeed (#5339)

* 🐛 fix: window path problem

* feat(server): filter /models endpoint to show only downloaded models (#5343)

- Add filtering logic to proxy server for GET /models requests
- Keep only models with status "downloaded" in response
- Remove Content-Length header to prevent mismatch after filtering
- Support both ListModelsResponseDto and direct array formats
- Add comprehensive tests for filtering functionality
- Fix Content-Length header conflict causing empty responses

Fixes issue where all models were returned regardless of download status.
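
As a rough illustration of the filtering step this commit describes (a sketch under assumed field and function names, not the proxy's actual code), using serde_json:

```rust
use serde_json::Value;

// Keep only entries whose "status" is "downloaded", whether the upstream body
// is a ListModelsResponseDto-style object ({"data": [...]}) or a bare array.
fn filter_downloaded_models(body: &str) -> serde_json::Result<String> {
    let mut json: Value = serde_json::from_str(body)?;

    let filter = |items: &mut Vec<Value>| {
        items.retain(|m| m.get("status").and_then(Value::as_str) == Some("downloaded"));
    };

    match &mut json {
        Value::Object(obj) => {
            if let Some(Value::Array(items)) = obj.get_mut("data") {
                filter(items);
            }
        }
        Value::Array(items) => filter(items),
        _ => {}
    }

    // The proxy must also drop the upstream Content-Length header, since the
    // filtered body is shorter than the original response.
    serde_json::to_string(&json)
}
```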

* 🐛fix: render streaming token speed based on thread ID & assistant metadata (#5346)

* fix(server): add gzip decompression support for /models endpoint filtering (#5349)

- Add gzip detection using magic number check (0x1f 0x8b)
- Implement gzip decompression before JSON parsing
- Add gzip re-compression for filtered responses
- Fix "invalid utf-8 sequence" error when upstream returns gzipped content
- Maintain Content-Encoding consistency for compressed responses
- Add comprehensive gzip handling with flate2 library

Resolves issue where filtering failed on gzip-compressed model responses.
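
A minimal sketch of the gzip handling described here, assuming the flate2 crate the commit mentions; the function names and the exact wiring into the proxy are illustrative, not the actual implementation:

```rust
use std::io::{Read, Write};

use flate2::read::GzDecoder;
use flate2::write::GzEncoder;
use flate2::Compression;

// Detect gzip by its magic bytes (0x1f 0x8b) and decompress before parsing.
fn maybe_gunzip(body: &[u8]) -> std::io::Result<(Vec<u8>, bool)> {
    if body.starts_with(&[0x1f, 0x8b]) {
        let mut decoded = Vec::new();
        GzDecoder::new(body).read_to_end(&mut decoded)?;
        Ok((decoded, true))
    } else {
        Ok((body.to_vec(), false))
    }
}

// Re-compress the filtered body so Content-Encoding stays consistent.
fn maybe_gzip(body: &[u8], was_gzipped: bool) -> std::io::Result<Vec<u8>> {
    if !was_gzipped {
        return Ok(body.to_vec());
    }
    let mut encoder = GzEncoder::new(Vec::new(), Compression::default());
    encoder.write_all(body)?;
    encoder.finish()
}
```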

* fix(proxy): implement true HTTP streaming for chat completions API (#5350)

* fix: glitch toggle gpus (#5353)

* fix: glitch toggle gpu

* fix: Using the GPU's array index as a key for gpuLoading

* enhancement: added try-finally

* fix: built in models capabilities (#5354)

* 🐛fix: setting provider hide model capabilities (#5355)

* 🐛fix: setting provider hide model capabilities

* 🐛fix: hide tools icon on dropdown model providers

* fix: stop server on app close or reload

* ✨enhancement: reset heading class

---------

Co-authored-by: Louis <[email protected]>

* fix: stop api server on page unload (#5356)

* fix: stop api server on page unload

* fix: check api server status on reload

* refactor: api server state

* fix: should not pop the guard

* 🐛fix: avoid render html title thread (#5375)

* 🐛fix: avoid render html title thread

* chore: minor bump - tokenjs for manual adding models

---------

Co-authored-by: Louis <[email protected]>

---------

Co-authored-by: Faisal Amir <[email protected]>
Co-authored-by: LazyYuuki <[email protected]>
Co-authored-by: Bui Quang Huy <[email protected]>
Co-authored-by: hiento09 <[email protected]>
Co-authored-by: Nguyen Ngoc Minh <[email protected]>
Co-authored-by: Sam Hoang Van <[email protected]>
Co-authored-by: Ramon Perez <[email protected]>
@david-menloai david-menloai moved this from QA to Done in Jan Jun 20, 2025
@LazyYuuki LazyYuuki removed this from the v0.7.2 milestone Jul 3, 2025