
[client] refactor auto update workflow#5448

Merged
pappz merged 42 commits into main from refactor/auto-update on Mar 13, 2026

Conversation

Collaborator

@pappz pappz commented Feb 25, 2026

Describe your changes

  • Auto-update logic moved out of the UI into a dedicated updatemanager.Manager service that runs in the connection layer. The
    UI no longer polls or checks for updates independently.
  • The update manager supports three modes driven by the management server's auto-update policy:
    1. No policy set by the management server: checks GitHub for the latest version and notifies the user (previous behavior, now centralized)
    2. Management enforces the update: the "About" menu triggers installation directly instead of just downloading the file; the user still initiates the action
    3. Management forces the update: installation proceeds automatically without user interaction
  • updateManager lifecycle is now owned by daemon, giving the daemon server direct control via a new TriggerUpdate RPC
  • Introduces EngineServices struct to group external service dependencies passed to NewEngine, reducing its argument count from 11 to 4

Issue ticket number and link

Stack

Checklist

  • Is it a bug fix
  • Is it a typo/documentation fix
  • Is it a feature enhancement
  • Is it a refactor
  • Created tests that fail without the change (if possible)

By submitting this pull request, you confirm that you have read and agree to the terms of the Contributor License Agreement.

Documentation

Select exactly one:

  • I added/updated documentation for this change
  • Documentation is not needed for this change (explain why)

Docs PR URL (required if "docs added" is checked)

Paste the PR link from https://github.com/netbirdio/docs here:

https://github.com/netbirdio/docs/pull/__

Summary by CodeRabbit

  • New Features

    • UI can trigger installation of a pending enforced update via the daemon.
    • Update notifications show version and whether the update is enforced.
    • GUI starts update handling earlier to surface available updates promptly.
  • Refactor

    • Reworked update manager and engine wiring for clearer update states, install flow, and improved cross-platform auto-update behavior.

Remove the UI-side version fetcher and unify all update detection in
updatemanager.Manager, which now supports two modes:
- Mode 1 (download-only): notifies UI about latest version from
pkgs.netbird.io; user opens browser to download
- Mode 2 (enforced): management-pushed version notifies UI with
install prompt; user triggers installer via new TriggerUpdate RPC
- Mode 2 + AlwaysUpdate: skips user interaction, installs directly

Manager is now cross-platform; isAutoUpdateSupported() guards only
the install path. handleUpdate() never auto-installs — installation
is always explicit. Add platform-specific test files for Windows/Darwin
and Linux.
@coderabbitai
Contributor

coderabbitai Bot commented Feb 25, 2026

Important

Review skipped

This PR was authored by the user configured for CodeRabbit reviews. CodeRabbit does not review PRs authored by this user. It's recommended to use a dedicated user account to post CodeRabbit review feedback.

⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 0ed978bc-0a02-4b2e-af02-4de6bd41298c

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.

📝 Walkthrough

Centralized and renamed update subsystem (updatemanager → updater), added/upgraded updater APIs and Manager behavior, removed the initial-auto-update boolean from ConnectClient, introduced EngineServices for dependency injection, added TriggerUpdate daemon RPC, and converted UI to event-driven update notifications.

Changes

Cohort / File(s) / Summary

  • Package rename & import updates
    client/internal/updater/..., client/internal/updater/doc.go, client/internal/updater/update.go, client/internal/updater/.../reposign/verify.go, client/cmd/signer/*, client/cmd/update_supported.go, client/internal/debug/debug.go, client/server/updateresult.go
    Renamed package updatemanager → updater and updated imports across the repo; mostly path changes with no behavioral logic edits.
  • ConnectClient signature & wiring
    client/android/client.go, client/ios/NetBirdSDK/client.go, client/cmd/up.go, client/embed/embed.go, client/internal/connect.go
    Removed trailing boolean from NewConnectClient calls/signature; ConnectClient now holds updateManager *updater.Manager with SetUpdateManager/UpdateManager accessors; inline creation of the update manager at NewConnectClient sites removed.
  • Engine DI & flow refactor
    client/internal/engine.go, client/internal/engine_test.go
    Introduced EngineServices grouping external deps; NewEngine now receives EngineServices; removed InitialUpdateHandling and adapted the auto-update flow to use the injected UpdateManager; tests updated to construct EngineServices.
  • Updater core & API changes
    client/internal/updater/manager.go, client/internal/updater/manager_*.go, client/internal/updater/manager_test.go, client/internal/updater/manager_linux_test.go, client/internal/updater/manager_test_helpers_test.go, client/internal/updater/supported_*.go
    Reimplemented Manager: new constructor NewManager; added SetDownloadOnly, SetVersion(..., force), Install, NotifyUI; internal install serialization (tryInstall/install); downloadOnly/force handling; platform support helpers; and many tests. Removed the prior platform no-op shim.
  • Server wiring & RPC
    client/server/server.go, client/server/server_test.go, client/server/event.go, client/server/triggerupdate.go
    Server now initializes and wires updater.Manager, starts the update manager for the GUI, removed the doInitialAutoUpdate flag from connect paths, and exposes a TriggerUpdate RPC that calls UpdateManager.Install.
  • Daemon protobuf
    client/proto/daemon.proto
    Added rpc TriggerUpdate(TriggerUpdateRequest) returns (TriggerUpdateResponse) and its messages; removed optional bool autoUpdate from UpRequest.
  • UI & event-driven update flow
    client/ui/client_ui.go, client/ui/event/event.go, client/ui/event_handler.go, client/ui/profile.go, client/ui/quickactions.go
    Converted to event-driven update notifications; removed the wannaAutoUpdate argument from menuUpClick and profile callbacks; UI tracks enforced updates and may call the daemon TriggerUpdate RPC (with URL fallback).
  • Installer / reposign import adjustments
    client/internal/updater/installer/installer_common.go, client/internal/updater/reposign/verify.go, client/cmd/signer/*
    Updated imports to the new updater package layout for installer/downloader/reposign.
  • Tests updated/added
    client/internal/engine_test.go, client/internal/updater/*_test.go, client/server/server_test.go
    Updated tests for the new EngineServices and ConnectClient signature; added extensive updater unit tests and test helpers (platform-specific files).

Sequence Diagram(s)

sequenceDiagram
    participant UI as UI/Client
    participant Daemon as Daemon/Server
    participant UpMgr as Updater.Manager
    participant Installer as Installer
    participant OS as OS

    UI->>Daemon: TriggerUpdate(ctx)
    activate Daemon
    Daemon->>UpMgr: Install(ctx)
    activate UpMgr
    UpMgr->>UpMgr: tryInstall(ctx, pendingVersion)
    UpMgr->>Installer: RunInstallation(ctx, pendingVersion)
    activate Installer
    Installer->>OS: perform install steps
    Installer-->>UpMgr: result (success/error)
    deactivate Installer
    alt success
        UpMgr->>UI: publish new_version_available event
        UpMgr-->>Daemon: return nil
    else error
        UpMgr-->>Daemon: return error
    end
    deactivate UpMgr
    Daemon-->>UI: TriggerUpdateResponse{success,errorMsg}
    deactivate Daemon

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • mlsmaycon
  • lixmal

"🐰
I hopped through code with care and cheer,
Moved flags away and pulled the update near,
Services bundled tight, RPCs set to run,
Now installations start beneath the sun,
A carrot-toast to builds — hop, patch, and fun!"

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 25.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Title check: ✅ Passed. The title '[client] refactor auto update workflow' is concise and accurately describes the main change: a refactoring of the auto-update workflow from the UI layer to the daemon/connection layer, which aligns with the changeset's core objectives.
  • Description check: ✅ Passed. The description adequately covers the main changes (auto-update logic migration, three update modes, EngineServices introduction) and includes the required checklist, license agreement confirmation, and documentation decision. However, it lacks an explicit issue ticket link and stack/branch information as shown in the template.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

…ndling

- Ensure consistent update manager initialization across components.
- Refactor event filtering logic for better readability and maintainability.
- Adjust auto-update trigger timeout from 5 to 3 minutes.
- Fix race conditions in TriggerUpdate and startUpdateManagerForGUI.
- Add logging for improved debugging of update-related processes.
- Fix incorrect update icon reset logic on daemon version change.
- Adjust startUpdateManagerForGUI invocation order for consistency.
- Introduce NotifyUI to re-publish update state for reconnecting UIs.
@pappz pappz marked this pull request as ready for review February 25, 2026 17:33
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 11

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
client/internal/updatemanager/manager.go (1)

174-185: ⚠️ Potential issue | 🟠 Major

Avoid partial state mutation before version validation

At Line 174, downloadOnly/forceUpdate are set before parsing expectedVersion. If parsing fails at Line 183, manager state is left half-updated.

Proposed fix
-	m.downloadOnly = false
-	m.forceUpdate = forceUpdate
-
-	if expectedVersion == latestVersion {
-		m.updateToLatestVersion = true
-		m.expectedVersion = nil
-	} else {
-		expectedSemVer, err := v.NewVersion(expectedVersion)
+	var (
+		nextExpected *v.Version
+		nextToLatest bool
+	)
+
+	if expectedVersion == latestVersion {
+		nextToLatest = true
+	} else {
+		expectedSemVer, err := v.NewVersion(expectedVersion)
 		if err != nil {
 			log.Errorf("error parsing version: %v", err)
 			return
 		}
 		if m.expectedVersion != nil && m.expectedVersion.Equal(expectedSemVer) {
 			return
 		}
-		m.expectedVersion = expectedSemVer
-		m.updateToLatestVersion = false
+		nextExpected = expectedSemVer
 	}
+	m.downloadOnly = false
+	m.forceUpdate = forceUpdate
+	m.expectedVersion = nextExpected
+	m.updateToLatestVersion = nextToLatest
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager.go` around lines 174 - 185, The code
mutates manager state (m.downloadOnly, m.forceUpdate) before validating
expectedVersion, so if v.NewVersion(expectedVersion) fails the manager is left
partially updated; change the flow to validate/parse expectedVersion first
using v.NewVersion(expectedVersion) and handle errors before assigning any
fields on m, then set m.downloadOnly and m.forceUpdate and update
m.updateToLatestVersion / m.expectedVersion only after successful parsing (or
alternatively, parse first and on success assign all related fields together),
referencing the existing symbols m.downloadOnly, m.forceUpdate, v.NewVersion,
m.updateToLatestVersion, and m.expectedVersion.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@client/android/client.go`:
- Around line 160-161: There is a mismatched constructor call to
internal.NewConnectClient: one call was updated to the new 3-argument signature
(ctx, cfg, c.recorder) but the other call still passes 4 arguments which breaks
Android builds; locate the remaining call to internal.NewConnectClient in this
file (the earlier usage that still supplies four parameters) and change it to
the new 3-argument form (remove the extra/4th argument), then adjust any
assignment or subsequent usage expectations (e.g., c.connectClient
initialization) to match the returned ConnectClient from the 3-arg constructor.

In `@client/internal/engine_ssh.go`:
- Line 401: The inline comment is misleading—replace "Install SSH server with
new authorization configuration" with wording that reflects
reconfiguration/update, since this path runs only when e.sshServer != nil inside
updateSSHServerAuth; update the comment to say something like "Update SSH server
authorization configuration" and ensure consistency with the updateSSHServerAuth
function name and the call to e.sshServer.UpdateSSHAuth.

In `@client/internal/engine.go`:
- Around line 809-816: When autoUpdateSettings is nil the code currently returns
without changing the updater mode; update that branch to call
e.updateManager.SetDownloadOnly() before returning so clearing the policy resets
any previous enforced/forced mode back to download-only. Locate the nil check
for autoUpdateSettings in engine.go (the block referencing autoUpdateSettings ==
nil) and invoke e.updateManager.SetDownloadOnly() there, then return.

In `@client/internal/updatemanager/manager_linux_test.go`:
- Around line 61-76: The test uses a too-short timeout when calling
waitForUpdateEvent (currently 10*time.Millisecond) causing flaky async behavior;
update all calls to waitForUpdateEvent in manager_linux_test.go (the ones used
after initial subscribe check and after mockUpdate.onUpdate()) to use a larger,
CI-friendly timeout (e.g., 100*time.Millisecond or 1*time.Second) so the event
delivery has enough time under load—adjust every occurrence of
waitForUpdateEvent(...) in that file accordingly.

In `@client/internal/updatemanager/manager_test_helpers_test.go`:
- Around line 42-44: The test helper currently treats any presence of the
"enforced" key as true; update the logic that reads
event.Metadata["new_version_available"] and event.Metadata["enforced"] to parse
the enforced value as a boolean (e.g., using strconv.ParseBool) instead of
checking key presence, returning the parsed bool (defaulting to false on
missing/invalid values) so "false" string is handled correctly; ensure you
reference event.Metadata["new_version_available"] and event.Metadata["enforced"]
when implementing the change.

In `@client/internal/updatemanager/manager.go`:
- Around line 141-147: When switching modes clear stale enforcement state:
update Manager.SetDownloadOnly to also nil out m.pendingVersion and
m.expectedVersion and reset m.forceUpdate and m.updateToLatestVersion inside the
same m.updateMutex lock so no race occurs; likewise, in the empty-version branch
(the code path that currently sets expectedVersion to nil around the
empty-version handling) explicitly reset m.forceUpdate = false and
m.pendingVersion = nil (and ensure these assignments occur while holding the
same mutex). Also review Install to ensure it no longer relies on a leftover
m.pendingVersion after mode changes.
- Around line 200-214: Both external entry points (Manager.Install and the path
that triggers RunInstallation) can call m.install(...) concurrently; serialize
them by introducing an installation guard. Add a new mutex/flag pair (e.g.,
m.installMutex and m.installing bool) on Manager and, in both Manager.Install
and the code path that calls RunInstallation, acquire m.installMutex, check
m.installing (return an appropriate error or wait), set m.installing = true,
release the mutex, then call m.install(...); ensure you clear m.installing and
properly unlock in a deferred cleanup so the flag is reset on success or error.
Also update any other places that call m.install(...) (including the
RunInstallation caller) to use the same guard so only one installer runs at a
time.

In `@client/server/event.go`:
- Line 17: The call to startUpdateManagerForGUI() from SubscribeEvents is not
synchronized, allowing concurrent subscribers to both initialize the update
manager because updateManager.Start() checks m.cancel without a lock; fix by
adding a single-init guard around startUpdateManagerForGUI() (e.g., a
package-level or receiver-level sync.Once named updateManagerOnce or a
mutex-protected boolean) so only one goroutine can perform initialization, and
ensure updateManager.Start()’s m.cancel check/state changes occur under the same
lock or sync.Once to eliminate the race.

In `@client/server/server.go`:
- Around line 142-146: The update manager is only having CheckUpdateSuccess
called but not started in the daemon lifecycle, which leaves non-GUI flows
without active update processing; after creating the manager via
updatemanager.NewManager (when s.updateManager == nil) call
s.updateManager.Start() (before or immediately after CheckUpdateSuccess) so the
manager runs regardless of GUI hooks — ensure this change is in the same
initialization path where connect() injects the manager into ConnectClient so
Start() is invoked for daemon/non-GUI flows as well.

In `@client/ui/client_ui.go`:
- Around line 851-852: The call to s.conn.Up currently uses the stored s.ctx
which ignores the method's passed-in ctx; update the call to use the local
method context and connection variable by calling conn.Up(ctx,
&proto.UpRequest{}) instead of s.conn.Up(s.ctx, ...), so the RPC respects caller
cancellation/timeouts (look for symbols s.conn.Up, s.ctx, conn and ctx in the
surrounding function).

In `@client/ui/event_handler.go`:
- Around line 226-246: The click handler currently calls conn.TriggerUpdate
synchronously and can block the tray event loop; change it to perform the RPC
with a context timeout and off the UI goroutine: create a context.WithTimeout
(reuse defaultFailTimeout or a small fixed timeout) derived from h.client.ctx,
call conn.TriggerUpdate inside a goroutine, send the response/error on a
channel, and select on the channel or the timeout so the handler returns
immediately; also ensure openURL is invoked in its own goroutine when used as an
error fallback. Reference: getSrvClient, conn.TriggerUpdate, h.client.ctx,
defaultFailTimeout, and openURL.

---

Outside diff comments:
In `@client/internal/updatemanager/manager.go`:
- Around line 174-185: The code mutates manager state (m.downloadOnly,
m.forceUpdate) before validating expectedVersion, so if
v.NewVersion(expectedVersion) fails the manager is left partially updated;
change the flow to validate/parse expectedVersion first using
v.NewVersion(expectedVersion) and handle errors before assigning any fields on
m, then set m.downloadOnly and m.forceUpdate and update m.updateToLatestVersion
/ m.expectedVersion only after successful parsing (or alternatively, parse first
and on success assign all related fields together), referencing the existing
symbols m.downloadOnly, m.forceUpdate, v.NewVersion, m.updateToLatestVersion,
and m.expectedVersion.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9a6a72e and 5cc6040.

⛔ Files ignored due to path filters (2)
  • client/proto/daemon.pb.go is excluded by !**/*.pb.go
  • client/proto/daemon_grpc.pb.go is excluded by !**/*.pb.go
📒 Files selected for processing (26)
  • client/android/client.go
  • client/cmd/up.go
  • client/embed/embed.go
  • client/internal/connect.go
  • client/internal/engine.go
  • client/internal/engine_ssh.go
  • client/internal/engine_test.go
  • client/internal/updatemanager/manager.go
  • client/internal/updatemanager/manager_linux_test.go
  • client/internal/updatemanager/manager_test.go
  • client/internal/updatemanager/manager_test_helpers_test.go
  • client/internal/updatemanager/manager_unsupported.go
  • client/internal/updatemanager/supported_darwin.go
  • client/internal/updatemanager/supported_other.go
  • client/internal/updatemanager/supported_windows.go
  • client/ios/NetBirdSDK/client.go
  • client/proto/daemon.proto
  • client/server/event.go
  • client/server/server.go
  • client/server/server_test.go
  • client/server/triggerupdate.go
  • client/ui/client_ui.go
  • client/ui/event/event.go
  • client/ui/event_handler.go
  • client/ui/profile.go
  • client/ui/quickactions.go
💤 Files with no reviewable changes (1)
  • client/internal/updatemanager/manager_unsupported.go

Comment thread client/android/client.go
Comment thread client/internal/engine_ssh.go Outdated
Comment thread client/internal/engine.go
Comment thread client/internal/updatemanager/manager_linux_test.go Outdated
Comment thread client/internal/updater/manager_test_helpers_test.go
Comment thread client/internal/updater/manager.go
Comment thread client/server/event.go
Comment thread client/server/server.go
Comment thread client/ui/client_ui.go
Comment thread client/ui/event_handler.go Outdated
pappz and others added 2 commits February 25, 2026 18:38
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Introduce `installMutex` and `installing` flags in `updatemanager.Manager` to ensure only one installation process can run at a time. Implement `tryInstall` to handle installation locking and replace direct calls to `install`.
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

♻️ Duplicate comments (8)
client/internal/updatemanager/manager_test_helpers_test.go (1)

42-44: enforced is read via key-presence rather than value-parse — "false" would be treated as true.

Line 43 uses the blank identifier to check if the key exists, not its value. A metadata entry "enforced": "false" would incorrectly return enforced = true.

The fix proposed in the prior review (using strconv.ParseBool) still applies here unchanged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager_test_helpers_test.go` around lines 42 -
44, The test helper currently treats the presence of the "enforced" key as a
boolean (using blank identifier) which will treat "false" as true; update the
logic in the closure inside manager_test_helpers_test.go (the block returning
val, enforced) to read the string value for event.Metadata["enforced"], use
strconv.ParseBool to parse it (handle parse error by treating as false), and set
the enforced return value to the parsed boolean instead of using key-presence;
ensure you import strconv if not already present.
client/server/event.go (1)

17-17: Race condition on startUpdateManagerForGUI() — invoked unprotected on every subscription.

The previous review already flagged this: concurrent SubscribeEvents callers (e.g., GUI reconnects) can both reach startUpdateManagerForGUI() simultaneously, bypassing any m.cancel != nil guard inside Manager.Start() that is not held under a lock. A sync.Once or mutex-protected init flag is still needed.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/server/event.go` at line 17, Concurrent calls to SubscribeEvents can
race when invoking startUpdateManagerForGUI(), bypassing Manager.Start()'s
internal m.cancel guard; protect the GUI update manager initialization by adding
a sync.Once (or a mutex-protected boolean) around startUpdateManagerForGUI() so
it only runs once across goroutines. Locate the SubscribeEvents callers and
replace the direct call to startUpdateManagerForGUI() with a guarded invocation
(e.g., guiUpdateOnce.Do(startUpdateManagerForGUI) or acquire a mutex, check an
initialized flag, call startUpdateManagerForGUI(), set the flag, and release),
and ensure Manager.Start() still uses m.cancel but no longer relies on it for
once-only init.
client/ui/event_handler.go (1)

214-246: ⚠️ Potential issue | 🟠 Major

Synchronous RPC in the tray event loop can freeze the entire UI.

handleUpdateClick is called directly from the listen select loop (Line 61). The TriggerUpdate RPC at Line 233 uses h.client.ctx (no timeout) and runs synchronously, so an unresponsive daemon will block all tray menu event handling indefinitely.

Proposed fix: run off the event loop with a timeout
 func (h *eventHandler) handleUpdateClick() {
 	h.client.updateIndicationLock.Lock()
 	enforced := h.client.isEnforcedUpdate
 	h.client.updateIndicationLock.Unlock()

 	if !enforced {
 		if err := openURL(version.DownloadUrl()); err != nil {
 			log.Errorf("failed to open download URL: %v", err)
 		}
 		return
 	}

+	go func() {
 		conn, err := h.client.getSrvClient(defaultFailTimeout)
 		if err != nil {
 			log.Errorf("failed to get service client for update: %v", err)
 			_ = openURL(version.DownloadUrl())
 			return
 		}

-		resp, err := conn.TriggerUpdate(h.client.ctx, &proto.TriggerUpdateRequest{})
+		rpcCtx, cancel := context.WithTimeout(h.client.ctx, defaultFailTimeout)
+		defer cancel()
+
+		resp, err := conn.TriggerUpdate(rpcCtx, &proto.TriggerUpdateRequest{})
 		if err != nil {
 			log.Errorf("TriggerUpdate failed: %v", err)
 			_ = openURL(version.DownloadUrl())
 			return
 		}
 		if !resp.Success {
 			log.Errorf("TriggerUpdate failed: %s", resp.ErrorMsg)
 			_ = openURL(version.DownloadUrl())
 			return
 		}

 		log.Infof("update triggered via daemon")
+	}()
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/ui/event_handler.go` around lines 214 - 246, handleUpdateClick
performs a synchronous TriggerUpdate RPC using h.client.ctx on the tray event
loop which can freeze the UI if the daemon is unresponsive; change it to run the
RPC off the event loop with a bounded context/timeout and do not block the
caller: create a new context with timeout (e.g., context.WithTimeout) inside
handleUpdateClick, use that context for conn.TriggerUpdate instead of
h.client.ctx, run the RPC call in a goroutine if needed to avoid blocking the
select loop, and ensure errors/timeouts fall back to opening
version.DownloadUrl() (references: handleUpdateClick, TriggerUpdate,
h.client.ctx, getSrvClient).
client/ui/client_ui.go (1)

851-852: ⚠️ Potential issue | 🟡 Minor

Use the method-scoped context/client for the Up RPC.

Line 851 should use conn.Up(ctx, ...) so cancellation and timeout behavior from the current call is respected.

✅ Proposed fix
-	if _, err := s.conn.Up(s.ctx, &proto.UpRequest{}); err != nil {
+	if _, err := conn.Up(ctx, &proto.UpRequest{}); err != nil {
 		return fmt.Errorf("start connection: %w", err)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/ui/client_ui.go` around lines 851 - 852, The call to s.conn.Up is
using the struct context s.ctx instead of the method-scoped context, so change
the RPC invocation from s.conn.Up(s.ctx, ...) to s.conn.Up(ctx, ...) (use the
local ctx variable) so that cancellation and timeouts of the current call are
respected; update the call site where s.conn.Up(...) is invoked and ensure any
surrounding error handling (the fmt.Errorf wrapping) remains unchanged.
client/internal/updatemanager/manager_linux_test.go (1)

61-61: ⚠️ Potential issue | 🟠 Major

Increase async event wait timeout to avoid flaky Linux tests.

Line 61, Line 76, and Line 105 use 10*time.Millisecond, which is too tight for async event delivery under CI load.

⏱️ Proposed stabilization
+const eventWaitTimeout = 300 * time.Millisecond
...
-		ver, enforced := waitForUpdateEvent(sub, 10*time.Millisecond)
+		ver, enforced := waitForUpdateEvent(sub, eventWaitTimeout)
...
-		ver, enforced = waitForUpdateEvent(sub, 10*time.Millisecond)
+		ver, enforced = waitForUpdateEvent(sub, eventWaitTimeout)
...
-	ver, _ := waitForUpdateEvent(sub, 10*time.Millisecond)
+	ver, _ := waitForUpdateEvent(sub, eventWaitTimeout)

Also applies to: 76-76, 105-105

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager_linux_test.go` at line 61, The test
uses a too-short async wait timeout (10*time.Millisecond) when calling
waitForUpdateEvent in manager_linux_test.go which causes flaky CI; update the
timeout at each call site (the three occurrences where waitForUpdateEvent is
invoked) to a larger value such as 100*time.Millisecond or 500*time.Millisecond
to allow async event delivery under load—locate the calls to waitForUpdateEvent
in the test file and replace 10*time.Millisecond with the chosen increased
duration.
client/server/server.go (1)

142-146: ⚠️ Potential issue | 🟠 Major

Start the daemon-owned update manager when creating it.

Line 142-146 initializes s.updateManager and calls CheckUpdateSuccess, but does not call Start. Since this instance is passed into ConnectClient (Line 1616), non-GUI flows can run without active update processing.

▶️ Proposed fix
 	if s.updateManager == nil {
 		stateMgr := statemanager.New(s.profileManager.GetStatePath())
 		s.updateManager = updatemanager.NewManager(s.statusRecorder, stateMgr)
+		s.updateManager.Start(s.rootCtx)
 		s.updateManager.CheckUpdateSuccess(s.rootCtx)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/server/server.go` around lines 142 - 146, When initializing
s.updateManager in the block that creates stateMgr via statemanager.New and
assigns s.updateManager = updatemanager.NewManager(s.statusRecorder, stateMgr),
after calling s.updateManager.CheckUpdateSuccess(s.rootCtx) also call
s.updateManager.Start(...) to start the daemon-owned update processing so it
runs for non-GUI flows; ensure you use the same root context (s.rootCtx) or an
appropriate context and any required parameters on updatemanager.Start, so the
manager is active before being passed into ConnectClient.
client/internal/updatemanager/manager.go (2)

141-147: ⚠️ Potential issue | 🟠 Major

Clear stale state and avoid partial policy mutation on mode changes.

Line 143-147 and Line 166-171 do not clear pendingVersion (and empty-version fallback does not fully reset enforcement state). Also, Line 174-185 mutates mode fields before semver validation, so parse failures can leave stale policy active. This can later make Install act on outdated state.

Proposed fix
 func (m *Manager) SetDownloadOnly() {
 	m.updateMutex.Lock()
 	m.downloadOnly = true
 	m.forceUpdate = false
 	m.expectedVersion = nil
 	m.updateToLatestVersion = false
+	m.pendingVersion = nil
 	m.updateMutex.Unlock()

 	select {
 	case m.mgmUpdateChan <- struct{}{}:
 	default:
 	}
 }

 func (m *Manager) SetVersion(expectedVersion string, forceUpdate bool) {
 	log.Infof("set expected agent version for upgrade: %s", expectedVersion)

 	if !isAutoUpdateSupported() {
 		log.Warnf("auto-update not supported on this platform")
 		return
 	}

 	m.updateMutex.Lock()
 	defer m.updateMutex.Unlock()

 	if expectedVersion == "" {
 		log.Errorf("empty expected version provided")
 		m.expectedVersion = nil
 		m.updateToLatestVersion = false
 		m.downloadOnly = true
+		m.forceUpdate = false
+		m.pendingVersion = nil
 		return
 	}

-	m.downloadOnly = false
-	m.forceUpdate = forceUpdate
-
-	if expectedVersion == latestVersion {
-		m.updateToLatestVersion = true
-		m.expectedVersion = nil
-	} else {
-		expectedSemVer, err := v.NewVersion(expectedVersion)
-		if err != nil {
-			log.Errorf("error parsing version: %v", err)
-			return
-		}
-		if m.expectedVersion != nil && m.expectedVersion.Equal(expectedSemVer) {
-			return
-		}
-		m.expectedVersion = expectedSemVer
-		m.updateToLatestVersion = false
-	}
+	var newExpected *v.Version
+	newLatest := false
+	if expectedVersion == latestVersion {
+		newLatest = true
+	} else {
+		expectedSemVer, err := v.NewVersion(expectedVersion)
+		if err != nil {
+			log.Errorf("error parsing version: %v", err)
+			return
+		}
+		if m.expectedVersion != nil && m.expectedVersion.Equal(expectedSemVer) && !forceUpdate {
+			return
+		}
+		newExpected = expectedSemVer
+	}
+	m.downloadOnly = false
+	m.forceUpdate = forceUpdate
+	m.expectedVersion = newExpected
+	m.updateToLatestVersion = newLatest
+	if forceUpdate || newLatest {
+		m.pendingVersion = nil
+	}

 	select {
 	case m.mgmUpdateChan <- struct{}{}:
 	default:
 	}
 }
@@
 	if !downloadOnly && !forceUpdate {
 		m.pendingVersion = updateVersion
+	} else {
+		m.pendingVersion = nil
 	}

Also applies to: 166-171, 174-185, 331-333

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager.go` around lines 141 - 147, When
changing mode in Manager methods (e.g., SetDownloadOnly and the other
mode-mutating routines referenced), ensure you clear pendingVersion and reset
any enforcement state (so no stale pendingVersion or partial enforcement
remains) and perform semver parsing/validation before mutating mode fields; do
all mutations atomically under updateMutex so parse failures do not leave a
partially-applied policy. Concretely: acquire m.updateMutex, parse/validate the
provided version (or determine empty-version fallback) first, on parse error do
not change mode fields but still clear pendingVersion and reset enforcement
flags, and only after successful validation assign
downloadOnly/forceUpdate/updateToLatest/expectedVersion together; apply the same
fix to the other mode-setting functions mentioned (the blocks around the second
mode change and the fallback handling and the code at the later site
referenced).

200-214: ⚠️ Potential issue | 🔴 Critical

Serialize install triggers across UI and forced-update paths.

Lines 213 and 348 can call m.install(...) concurrently from different entry points. RunInstallation is non-idempotent, so parallel installs are a correctness and reliability risk.

Proposed fix
 type Manager struct {
@@
 	// updateMutex protects update, expectedVersion, updateToLatestVersion,
 	// downloadOnly, forceUpdate, pendingVersion, and lastTrigger fields
 	updateMutex sync.Mutex
+	installing  bool
 }
@@
 func (m *Manager) Install(ctx context.Context) error {
 	if !isAutoUpdateSupported() {
 		return fmt.Errorf("auto-update not supported on this platform")
 	}

 	m.updateMutex.Lock()
+	if m.installing {
+		m.updateMutex.Unlock()
+		return fmt.Errorf("update installation already in progress")
+	}
 	pending := m.pendingVersion
+	if pending == nil {
+		m.updateMutex.Unlock()
+		return fmt.Errorf("no pending version to install")
+	}
+	m.installing = true
 	m.updateMutex.Unlock()

-	if pending == nil {
-		return fmt.Errorf("no pending version to install")
-	}
+	defer func() {
+		m.updateMutex.Lock()
+		m.installing = false
+		m.updateMutex.Unlock()
+	}()

 	return m.install(ctx, pending)
 }
@@
 	if forceUpdate {
-		if err := m.install(ctx, updateVersion); err != nil {
+		m.updateMutex.Lock()
+		if m.installing {
+			m.updateMutex.Unlock()
+			return
+		}
+		m.installing = true
+		m.updateMutex.Unlock()
+
+		err := m.install(ctx, updateVersion)
+
+		m.updateMutex.Lock()
+		m.installing = false
+		m.updateMutex.Unlock()
+
+		if err != nil {
 			log.Errorf("force update failed: %v", err)
 		}
 		return
 	}

Also applies to: 347-351, 363-404

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager.go` around lines 200 - 214, The Install
path can call m.install concurrently with other entry points (e.g., the
forced-update/RunInstallation path), risking parallel non-idempotent installs;
add a dedicated mutex (e.g., m.installMutex sync.Mutex) to Manager and acquire
it around the actual install invocation in Manager.Install (wrap the call to
m.install(ctx, pending) with m.installMutex.Lock()/Unlock()) and do the same in
the other callers that directly call m.install (the
forced-update/RunInstallation code paths referenced in the review) so all calls
to m.install are serialized; ensure the new mutex is added to the Manager struct
and used consistently in every place that invokes m.install.
🧹 Nitpick comments (1)
client/proto/daemon.proto (1)

230-233: Reserve the removed field number to prevent future reuse.

The autoUpdate field (number 3) was removed from UpRequest. Per protobuf best practices, removed field numbers should be reserved to prevent accidental reuse with different semantics, which could cause subtle wire-format incompatibilities with older clients.

 message UpRequest {
   optional string profileName = 1;
   optional string username = 2;
+  reserved 3;
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/proto/daemon.proto` around lines 230 - 233, The removed autoUpdate
field number (3) must be reserved in the UpRequest message to prevent future
reuse; update the proto by adding a reserved declaration inside the UpRequest
message (e.g., reserve field number 3 and the name "autoUpdate") so the message
UpRequest explicitly reserves 3 and the removed field name.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@client/internal/engine_test.go`:
- Around line 263-268: Tests construct EngineServices without populating the
StateManager, causing Engine.Start()/Engine.Stop() to nil-deref e.stateManager;
update every test NewEngine/EngineServices instantiation (where EngineServices
is passed to NewEngine or used to start/stop an Engine) to include a valid
StateManager (e.g., a mock or test implementation) assigned to the StateManager
field so e.stateManager is non-nil during Start/Stop calls.

In `@client/internal/updatemanager/manager_test.go`:
- Line 59: Several tests in manager_test.go use a brittle 10*time.Millisecond
timeout when calling waitForUpdateEvent; increase these timeouts (calls to
waitForUpdateEvent in the test functions that currently pass
10*time.Millisecond) to a larger, CI-friendly value (e.g., 200*time.Millisecond
or 500*time.Millisecond) to stabilize async event waits—update each occurrence
(the calls at ver, _ := waitForUpdateEvent(sub, 10*time.Millisecond) and the
other instances) so they all use the new longer duration.

In `@client/internal/updatemanager/supported_darwin.go`:
- Line 14: Change the spelling in the log message used in supported_darwin.go:
update the call to log.Warnf that currently logs "auto-update disabled on Home
Brew installation" so it reads "auto-update disabled on Homebrew installation"
(leave the surrounding code and formatting of the log.Warnf call unchanged).

---

Duplicate comments:
In `@client/internal/updatemanager/manager_linux_test.go`:
- Line 61: The test uses a too-short async wait timeout (10*time.Millisecond)
when calling waitForUpdateEvent in manager_linux_test.go which causes flaky CI;
update the timeout at each call site (the three occurrences where
waitForUpdateEvent is invoked) to a larger value such as 100*time.Millisecond or
500*time.Millisecond to allow async event delivery under load—locate the calls
to waitForUpdateEvent in the test file and replace 10*time.Millisecond with the
chosen increased duration.

In `@client/internal/updatemanager/manager_test_helpers_test.go`:
- Around line 42-44: The test helper currently treats the presence of the
"enforced" key as a boolean (using blank identifier) which will treat "false" as
true; update the logic in the closure inside manager_test_helpers_test.go (the
block returning val, enforced) to read the string value for
event.Metadata["enforced"], use strconv.ParseBool to parse it (handle parse
error by treating as false), and set the enforced return value to the parsed
boolean instead of using key-presence; ensure you import strconv if not already
present.

In `@client/internal/updatemanager/manager.go`:
- Around line 141-147: When changing mode in Manager methods (e.g.,
SetDownloadOnly and the other mode-mutating routines referenced), ensure you
clear pendingVersion and reset any enforcement state (so no stale pendingVersion
or partial enforcement remains) and perform semver parsing/validation before
mutating mode fields; do all mutations atomically under updateMutex so parse
failures do not leave a partially-applied policy. Concretely: acquire
m.updateMutex, parse/validate the provided version (or determine empty-version
fallback) first, on parse error do not change mode fields but still clear
pendingVersion and reset enforcement flags, and only after successful validation
assign downloadOnly/forceUpdate/updateToLatest/expectedVersion together; apply
the same fix to the other mode-setting functions mentioned (the blocks around
the second mode change and the fallback handling and the code at the later site
referenced).
- Around line 200-214: The Install path can call m.install concurrently with
other entry points (e.g., the forced-update/RunInstallation path), risking
parallel non-idempotent installs; add a dedicated mutex (e.g., m.installMutex
sync.Mutex) to Manager and acquire it around the actual install invocation in
Manager.Install (wrap the call to m.install(ctx, pending) with
m.installMutex.Lock()/Unlock()) and do the same in the other callers that
directly call m.install (the forced-update/RunInstallation code paths referenced
in the review) so all calls to m.install are serialized; ensure the new mutex is
added to the Manager struct and used consistently in every place that invokes
m.install.

In `@client/server/event.go`:
- Line 17: Concurrent calls to SubscribeEvents can race when invoking
startUpdateManagerForGUI(), bypassing Manager.Start()'s internal m.cancel guard;
protect the GUI update manager initialization by adding a sync.Once (or a
mutex-protected boolean) around startUpdateManagerForGUI() so it only runs once
across goroutines. Locate the SubscribeEvents callers and replace the direct
call to startUpdateManagerForGUI() with a guarded invocation (e.g.,
guiUpdateOnce.Do(startUpdateManagerForGUI) or acquire a mutex, check an
initialized flag, call startUpdateManagerForGUI(), set the flag, and release),
and ensure Manager.Start() still uses m.cancel but no longer relies on it for
once-only init.

In `@client/server/server.go`:
- Around line 142-146: When initializing s.updateManager in the block that
creates stateMgr via statemanager.New and assigns s.updateManager =
updatemanager.NewManager(s.statusRecorder, stateMgr), after calling
s.updateManager.CheckUpdateSuccess(s.rootCtx) also call
s.updateManager.Start(...) to start the daemon-owned update processing so it
runs for non-GUI flows; ensure you use the same root context (s.rootCtx) or an
appropriate context and any required parameters on updatemanager.Start, so the
manager is active before being passed into ConnectClient.

In `@client/ui/client_ui.go`:
- Around line 851-852: The call to s.conn.Up is using the struct context s.ctx
instead of the method-scoped context, so change the RPC invocation from
s.conn.Up(s.ctx, ...) to s.conn.Up(ctx, ...) (use the local ctx variable) so
that cancellation and timeouts of the current call are respected; update the
call site where s.conn.Up(...) is invoked and ensure any surrounding error
handling (the fmt.Errorf wrapping) remains unchanged.

In `@client/ui/event_handler.go`:
- Around line 214-246: handleUpdateClick performs a synchronous TriggerUpdate
RPC using h.client.ctx on the tray event loop which can freeze the UI if the
daemon is unresponsive; change it to run the RPC off the event loop with a
bounded context/timeout and do not block the caller: create a new context with
timeout (e.g., context.WithTimeout) inside handleUpdateClick, use that context
for conn.TriggerUpdate instead of h.client.ctx, run the RPC call in a goroutine
if needed to avoid blocking the select loop, and ensure errors/timeouts fall
back to opening version.DownloadUrl() (references: handleUpdateClick,
TriggerUpdate, h.client.ctx, getSrvClient).

---

Nitpick comments:
In `@client/proto/daemon.proto`:
- Around line 230-233: The removed autoUpdate field number (3) must be reserved
in the UpRequest message to prevent future reuse; update the proto by adding a
reserved declaration inside the UpRequest message (e.g., reserve field number 3
and the name "autoUpdate") so the message UpRequest explicitly reserves 3 and
the removed field name.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9a6a72e and e7ec0d8.

⛔ Files ignored due to path filters (2)
  • client/proto/daemon.pb.go is excluded by !**/*.pb.go
  • client/proto/daemon_grpc.pb.go is excluded by !**/*.pb.go
📒 Files selected for processing (25)
  • client/android/client.go
  • client/cmd/up.go
  • client/embed/embed.go
  • client/internal/connect.go
  • client/internal/engine.go
  • client/internal/engine_test.go
  • client/internal/updatemanager/manager.go
  • client/internal/updatemanager/manager_linux_test.go
  • client/internal/updatemanager/manager_test.go
  • client/internal/updatemanager/manager_test_helpers_test.go
  • client/internal/updatemanager/manager_unsupported.go
  • client/internal/updatemanager/supported_darwin.go
  • client/internal/updatemanager/supported_other.go
  • client/internal/updatemanager/supported_windows.go
  • client/ios/NetBirdSDK/client.go
  • client/proto/daemon.proto
  • client/server/event.go
  • client/server/server.go
  • client/server/server_test.go
  • client/server/triggerupdate.go
  • client/ui/client_ui.go
  • client/ui/event/event.go
  • client/ui/event_handler.go
  • client/ui/profile.go
  • client/ui/quickactions.go
💤 Files with no reviewable changes (1)
  • client/internal/updatemanager/manager_unsupported.go

Comment thread client/internal/engine_test.go
Comment thread client/internal/updatemanager/manager_test.go Outdated
Comment thread client/internal/updatemanager/supported_darwin.go Outdated
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
client/proto/daemon.proto (1)

230-233: ⚠️ Potential issue | 🟡 Minor

Reserve the removed autoUpdate field number in UpRequest.

The optional bool autoUpdate = 3 field was deleted but field number 3 is not reserved. Any client still carrying the old proto will silently marshal/unmarshal data into whatever field 3 becomes in the future, and old servers may misread new field 3 payloads as autoUpdate.

🛡️ Proposed fix
 message UpRequest {
   optional string profileName = 1;
   optional string username = 2;
+  reserved 3;
+  reserved "autoUpdate";
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/proto/daemon.proto` around lines 230 - 233, The UpRequest message
removed the optional bool autoUpdate = 3 but did not reserve field number 3;
update the proto by adding a reserved field number to the UpRequest message
(reserve 3) so future fields cannot reuse that tag and old clients/servers won't
misinterpret legacy autoUpdate data—modify the UpRequest message declaration
(message UpRequest) to include a reservation for field number 3.
client/internal/updatemanager/manager.go (1)

174-184: ⚠️ Potential issue | 🟡 Minor

Apply mode flags only after explicit-version parsing succeeds.

A malformed explicit version currently leaves the manager out of download-only mode with no valid update target.

Suggested diff
-	m.downloadOnly = false
-	m.forceUpdate = forceUpdate
-
 	if expectedVersion == latestVersion {
+		m.downloadOnly = false
+		m.forceUpdate = forceUpdate
 		m.updateToLatestVersion = true
 		m.expectedVersion = nil
 	} else {
 		expectedSemVer, err := v.NewVersion(expectedVersion)
 		if err != nil {
 			log.Errorf("error parsing version: %v", err)
+			m.downloadOnly = true
+			m.forceUpdate = false
+			m.expectedVersion = nil
+			m.updateToLatestVersion = false
 			return
 		}
+		m.downloadOnly = false
+		m.forceUpdate = forceUpdate
 		if m.expectedVersion != nil && m.expectedVersion.Equal(expectedSemVer) {
 			return
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager.go` around lines 174 - 184, Currently
m.downloadOnly and m.forceUpdate are set before attempting to parse the explicit
version (v.NewVersion), which means a malformed version can leave the manager
out of download-only mode with no valid expectedVersion; move the assignments so
that m.downloadOnly = false and m.forceUpdate = forceUpdate are applied only
after v.NewVersion succeeds (and after the code that sets
m.updateToLatestVersion and m.expectedVersion), and ensure that on parse error
you return without changing m.downloadOnly/m.forceUpdate or that you explicitly
restore them to their previous values so the manager remains in a consistent
download-only state.
♻️ Duplicate comments (11)
client/internal/updatemanager/supported_darwin.go (1)

11-17: ⚠️ Potential issue | 🟡 Minor

Minor: "Home Brew" → "Homebrew".

The canonical spelling is "Homebrew" (one word, lowercase 'b').

-		log.Warnf("auto-update disabled on Home Brew installation")
+		log.Warnf("auto-update disabled on Homebrew installation")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/supported_darwin.go` around lines 11 - 17, The
log message in isAutoUpdateSupported() uses the incorrect spelling "Home Brew";
update the string passed to log.Warnf in that function (the call to
log.Warnf("auto-update disabled on Home Brew installation")) to use the
canonical "Homebrew" spelling so the message reads "auto-update disabled on
Homebrew installation".
client/server/event.go (1)

17-17: ⚠️ Potential issue | 🟠 Major

Race condition on concurrent SubscribeEvents calls.

startUpdateManagerForGUI() is invoked on every SubscribeEvents call without synchronization. If the GUI reconnects or multiple subscribers exist, concurrent calls can race on the update manager's initialization. Consider guarding with a sync.Once or a mutex-protected flag.

#!/bin/bash
# Check if startUpdateManagerForGUI has any synchronization guards
echo "== startUpdateManagerForGUI implementation =="
rg -nP 'func\s+\(s\s+\*Server\)\s+startUpdateManagerForGUI' --type go -A 30

echo ""
echo "== sync.Once or mutex guards near startUpdateManagerForGUI =="
rg -nP 'sync\.Once|updateManagerOnce|guiOnce' client/server/ --type go -C 3
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/server/event.go` at line 17, SubscribeEvents calls
startUpdateManagerForGUI on each subscription, causing a race when multiple
subscribers or reconnects occur; add a guard so the update manager is
initialized exactly once. Modify the Server struct to include a sync.Once (e.g.,
updateManagerOnce) or a mutex+bool flag and change SubscribeEvents to call
updateManagerOnce.Do(s.startUpdateManagerForGUI) (or lock, check flag, call
startUpdateManagerForGUI, set flag, unlock) instead of invoking
startUpdateManagerForGUI unconditionally to prevent concurrent initialization
races.
client/ui/event_handler.go (1)

214-246: ⚠️ Potential issue | 🟠 Major

Synchronous RPC on the tray event loop can stall the entire UI menu.

listen() processes all menu clicks sequentially in a single goroutine. The enforced-update path at lines 226–243 performs a blocking getSrvClient + TriggerUpdate RPC. If the daemon is slow or unreachable, every other menu item (connect, disconnect, quit, etc.) is frozen until the call completes or the context is cancelled.

Wrap the enforced-update path in a goroutine and add a timeout to the TriggerUpdate call:

Proposed fix
 func (h *eventHandler) handleUpdateClick() {
 	h.client.updateIndicationLock.Lock()
 	enforced := h.client.isEnforcedUpdate
 	h.client.updateIndicationLock.Unlock()

 	if !enforced {
 		if err := openURL(version.DownloadUrl()); err != nil {
 			log.Errorf("failed to open download URL: %v", err)
 		}
 		return
 	}

+	go func() {
 		conn, err := h.client.getSrvClient(defaultFailTimeout)
 		if err != nil {
 			log.Errorf("failed to get service client for update: %v", err)
 			_ = openURL(version.DownloadUrl())
 			return
 		}

-		resp, err := conn.TriggerUpdate(h.client.ctx, &proto.TriggerUpdateRequest{})
+		rpcCtx, cancel := context.WithTimeout(h.client.ctx, defaultFailTimeout)
+		defer cancel()
+
+		resp, err := conn.TriggerUpdate(rpcCtx, &proto.TriggerUpdateRequest{})
 		if err != nil {
 			log.Errorf("TriggerUpdate failed: %v", err)
 			_ = openURL(version.DownloadUrl())
 			return
 		}
 		if !resp.Success {
 			log.Errorf("TriggerUpdate failed: %s", resp.ErrorMsg)
 			_ = openURL(version.DownloadUrl())
 			return
 		}

 		log.Infof("update triggered via daemon")
+	}()
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/ui/event_handler.go` around lines 214 - 246, The enforced-update
branch in handleUpdateClick performs blocking getSrvClient and TriggerUpdate
calls on the UI event loop; move the entire enforced-update sequence into a new
goroutine so menu handling is non-blocking, and when calling TriggerUpdate use a
context with a short timeout (derive from h.client.ctx with context.WithTimeout)
so the RPC can't hang indefinitely; keep error logging and the fallback
openURL(version.DownloadUrl()) behavior inside the goroutine, ensure you cancel
the timeout context, and continue to use
getSrvClient/TriggerUpdate/openURL/resp.Success checks as before.
client/android/client.go (1)

127-128: LGTM — both NewConnectClient call sites correctly updated to the 3-argument form.

The previously flagged 4-argument call at Line 127 is now resolved, and Line 160 was already updated.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/android/client.go` around lines 127 - 128, No change required: both
NewConnectClient call sites are already updated to the 3-argument constructor
form; verify that internal.NewConnectClient(ctx, cfg, c.recorder) is used at the
client.android site and that the returned connectClient.RunOnAndroid invocation
still passes the existing arguments (c.tunAdapter, c.iFaceDiscover,
c.networkChangeListener, slices.Clone(dns.items), dnsReadyListener, stateFile);
if any 4-argument NewConnectClient remains, replace it with the 3-arg form and
ensure callers use the returned client's RunOnAndroid method.
client/internal/updatemanager/manager_linux_test.go (1)

61-61: ⚠️ Potential issue | 🟡 Minor

Event wait timeout of 10ms is still too tight for reliable CI runs.

All three waitForUpdateEvent calls (Lines 61, 76, 105) use 10*time.Millisecond, which was flagged previously and remains unchanged.

⏱️ Proposed stabilization
+const eventWaitTimeout = 300 * time.Millisecond
 ...
-		ver, enforced := waitForUpdateEvent(sub, 10*time.Millisecond)
+		ver, enforced := waitForUpdateEvent(sub, eventWaitTimeout)
 ...
-		ver, enforced = waitForUpdateEvent(sub, 10*time.Millisecond)
+		ver, enforced = waitForUpdateEvent(sub, eventWaitTimeout)

Also applies to: 76-76, 105-105

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager_linux_test.go` at line 61, The 10ms
timeout passed to waitForUpdateEvent is too small for CI; update all calls to
waitForUpdateEvent in manager_linux_test.go (the three usages that assign to
ver, enforced and the other two calls) to use a larger, stable duration (e.g.
100*time.Millisecond or 250*time.Millisecond) or introduce a shared constant
like testWait = 100*time.Millisecond and replace the literal 10*time.Millisecond
with that constant; keep the call signature unchanged and run tests to confirm
stability.
client/internal/updatemanager/manager_test_helpers_test.go (1)

42-44: ⚠️ Potential issue | 🟡 Minor

enforced key-presence check still treats "false" as true.

Line 43 uses map-key existence (_, enforced := event.Metadata["enforced"]) rather than parsing the value, so a stored "false" string incorrectly yields enforced = true. This was raised previously and remains unresolved.

✅ Proposed fix
+import "strconv"
 ...
 		if val, ok := event.Metadata["new_version_available"]; ok {
-			_, enforced := event.Metadata["enforced"]
-			return val, enforced
+			enforced := false
+			if raw, ok := event.Metadata["enforced"]; ok {
+				if parsed, err := strconv.ParseBool(raw); err == nil {
+					enforced = parsed
+				}
+			}
+			return val, enforced
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager_test_helpers_test.go` around lines 42 -
44, The metadata key check currently uses map-key existence (_, enforced :=
event.Metadata["enforced"]) which treats any stored string (including "false")
as true; update the logic that reads event.Metadata["enforced"] to parse the
string value (e.g., v, ok := event.Metadata["enforced"]; if ok { enforced =
strings.EqualFold(v, "true") } else { enforced = false }) so only the literal
"true" (case-insensitive) yields true; ensure you import strings and update the
code path that returns (val, enforced) so enforced reflects the parsed boolean
rather than key presence.
client/internal/updatemanager/manager_test.go (1)

59-59: ⚠️ Potential issue | 🟠 Major

Increase async event wait timeout in tests.

10*time.Millisecond is too brittle for CI and is already causing failing runs.

Suggested diff
+const eventWaitTimeout = 300 * time.Millisecond
...
-		ver, _ := waitForUpdateEvent(sub, 10*time.Millisecond)
+		ver, _ := waitForUpdateEvent(sub, eventWaitTimeout)
...
-		ver, _ = waitForUpdateEvent(sub, 10*time.Millisecond)
+		ver, _ = waitForUpdateEvent(sub, eventWaitTimeout)
...
-		ver, _ := waitForUpdateEvent(sub, 10*time.Millisecond)
+		ver, _ := waitForUpdateEvent(sub, eventWaitTimeout)
...
-		_, enforced := waitForUpdateEvent(sub, 10*time.Millisecond)
+		_, enforced := waitForUpdateEvent(sub, eventWaitTimeout)
...
-		ver, enforced2 := waitForUpdateEvent(sub2, 10*time.Millisecond)
+		ver, enforced2 := waitForUpdateEvent(sub2, eventWaitTimeout)

Also applies to: 71-71, 161-161, 192-192, 210-210

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager_test.go` at line 59, The test uses a
brittle 10*time.Millisecond timeout when calling waitForUpdateEvent; update all
test calls to use a larger, CI-friendly duration (e.g., 250*time.Millisecond or
1*time.Second) to avoid flakes. Search for waitForUpdateEvent(sub,
10*time.Millisecond) in manager_test.go (the occurrences around the test helper
calls) and replace the 10*time.Millisecond literal with the chosen larger
duration at each call site so all async waits use the new timeout.
client/ui/client_ui.go (1)

851-852: ⚠️ Potential issue | 🟡 Minor

Use the method-scoped gRPC client and context for Up.

This still calls s.conn.Up(s.ctx, ...) instead of conn.Up(ctx, ...), so request-scoped cancellation/timeout is ignored.

Suggested diff
-	if _, err := s.conn.Up(s.ctx, &proto.UpRequest{}); err != nil {
+	if _, err := conn.Up(ctx, &proto.UpRequest{}); err != nil {
 		return fmt.Errorf("start connection: %w", err)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/ui/client_ui.go` around lines 851 - 852, The call uses the
struct-level context and client (s.conn.Up(s.ctx, ...)) which ignores the
request-scoped context and client; change the call to use the method-scoped
variables (conn.Up(ctx, &proto.UpRequest{}) ) and pass ctx instead of s.ctx so
the per-request cancellation/timeout is honored—update the invocation of Up to
use conn and ctx rather than s.conn and s.ctx.
client/internal/engine_test.go (1)

263-268: ⚠️ Potential issue | 🟠 Major

Populate EngineServices.StateManager in test engine construction paths.

These test setups still omit StateManager; lifecycle paths that touch e.stateManager can panic when tests call Start()/Stop().

Suggested fix pattern (apply to each listed EngineServices block)
+import "github.com/netbirdio/netbird/client/internal/statemanager"
...
 	EngineServices{
 		SignalClient:   &signal.MockClient{},
 		MgmClient:      &mgmt.MockClient{},
 		RelayManager:   relayMgr,
 		StatusRecorder: peer.NewRecorder("https://mgm"),
+		StateManager:   statemanager.New(fmt.Sprintf("%s/state.json", t.TempDir())),
 	},

Also applies to: 437-442, 661-666, 831-836, 1038-1043, 1569-1574

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/engine_test.go` around lines 263 - 268, Test EngineServices
initializers omit StateManager causing e.stateManager panics when tests call
Start()/Stop(); update every EngineServices{ ... } test construction (the blocks
that currently set SignalClient, MgmClient, RelayManager, StatusRecorder) to
also set StateManager to a test implementation (for example a provided mock like
&state.MockManager{} or an in-memory/test helper such as state.NewTestManager())
that implements the StateManager interface; if no mock exists, add a minimal
test stub implementing StateManager and use it in those EngineServices
initializers so e.stateManager is non-nil during Start()/Stop() paths.
client/internal/updatemanager/manager.go (2)

141-147: ⚠️ Potential issue | 🟠 Major

Clear stale pending/enforcement state when update policy changes.

pendingVersion and forceUpdate can survive mode transitions, so Install() may act on an outdated pending target after the policy flips.

Suggested diff
 func (m *Manager) SetDownloadOnly() {
 	m.updateMutex.Lock()
 	m.downloadOnly = true
 	m.forceUpdate = false
 	m.expectedVersion = nil
 	m.updateToLatestVersion = false
+	m.pendingVersion = nil
 	m.updateMutex.Unlock()
@@
 	if expectedVersion == "" {
 		log.Errorf("empty expected version provided")
 		m.expectedVersion = nil
 		m.updateToLatestVersion = false
 		m.downloadOnly = true
+		m.forceUpdate = false
+		m.pendingVersion = nil
 		return
 	}
@@
 	if !downloadOnly && !forceUpdate {
 		m.pendingVersion = updateVersion
+	} else {
+		m.pendingVersion = nil
 	}

Also applies to: 166-171, 331-333

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager.go` around lines 141 - 147, When
switching update policies (e.g., in SetDownloadOnly and the other policy-mutator
methods at the indicated locations), clear any stale pending/enforcement state
so Install() won't act on an old target: inside the updateMutex-protected
section, in addition to setting
downloadOnly/forceUpdate/updateToLatestVersion/expectedVersion, also set
m.pendingVersion = nil, m.pendingEnforcement = nil (or the equivalent fields
that track pending targets/enforcements) and reset any related flags (e.g.,
forceUpdate) so no leftover pending state survives the policy transition.

200-214: ⚠️ Potential issue | 🔴 Critical

Serialize installer execution across manual and forced-update paths.

Install() and the forced-update flow can call install() concurrently, risking duplicate runs of the installer and its side effects.

Suggested diff
 type Manager struct {
@@
 	updateMutex sync.Mutex
+	installMu   sync.Mutex
 }
@@
 func (m *Manager) Install(ctx context.Context) error {
@@
-	return m.install(ctx, pending)
+	return m.installSingleFlight(ctx, pending)
 }
+
+func (m *Manager) installSingleFlight(ctx context.Context, target *v.Version) error {
+	m.installMu.Lock()
+	defer m.installMu.Unlock()
+	return m.install(ctx, target)
+}
@@
 	if forceUpdate {
-		if err := m.install(ctx, updateVersion); err != nil {
+		if err := m.installSingleFlight(ctx, updateVersion); err != nil {
 			log.Errorf("force update failed: %v", err)
 		}
 		return
 	}

Also applies to: 347-351, 363-404

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updatemanager/manager.go` around lines 200 - 214, Concurrent
calls to Manager.install via Install() and the forced-update path can run the
installer twice; serialize installer execution by acquiring a shared install
lock before checking pending and calling install. Modify Install() (and ensure
the forced-update flow that calls install() also uses the same guard) to either
hold m.updateMutex (or introduce a dedicated m.installMutex) around the
pendingVersion check and the subsequent call to m.install(ctx, pending) so the
mutex remains held until m.install returns; reference Manager, Install, install,
updateMutex and pendingVersion when making the change so both manual and
forced-update callers use the same lock.
🧹 Nitpick comments (2)
client/server/server.go (1)

219-221: Use method parameters consistently in the no-retry branch.

This path reads s.config/s.statusRecorder even though profileConfig/statusRecorder are already passed in. Using the parameters avoids hidden coupling to mutable server state.

Suggested diff
-	if s.config.DisableAutoConnect {
-		if err := s.connect(ctx, s.config, s.statusRecorder, runningChan); err != nil {
+	if profileConfig.DisableAutoConnect {
+		if err := s.connect(ctx, profileConfig, statusRecorder, runningChan); err != nil {
 			log.Debugf("run client connection exited with error: %v", err)
 		}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/server/server.go` around lines 219 - 221, The no-retry branch uses
server fields instead of the function parameters; change the call inside the
DisableAutoConnect branch to pass the provided profileConfig and statusRecorder
(and keep ctx and runningChan) rather than s.config and s.statusRecorder so the
code uses the method parameters consistently — update the s.connect invocation
to s.connect(ctx, profileConfig, statusRecorder, runningChan) (ensure names
match the surrounding function signature).
client/ui/client_ui.go (1)

1133-1135: Parse enforced from metadata value, not just key presence.

Using key existence alone will treat "enforced":"false" as enforced. Parse the value to bool for safer event handling.

Suggested diff
 	if newVersion, ok := event.Metadata["new_version_available"]; ok {
-		_, enforced := event.Metadata["enforced"]
+		enforced, _ := strconv.ParseBool(event.Metadata["enforced"])
 		log.Infof("received new_version_available event: version=%s enforced=%v", newVersion, enforced)
 		s.onUpdateAvailable(newVersion, enforced)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/ui/client_ui.go` around lines 1133 - 1135, The code currently treats
enforced by presence only; change the handling in the event.Metadata block so
that instead of using "_, enforced := event.Metadata[\"enforced\"]" you read the
metadata value (e.g., val, ok := event.Metadata["enforced"]) and parse it to a
boolean (using strconv.ParseBool or explicit val == "true") into the enforced
variable, defaulting to false on missing/parse error, then pass that boolean
into log.Infof (references: event.Metadata, newVersion, enforced, log.Infof);
also add the strconv import to the file.
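For reference, strconv.ParseBool accepts only the literal forms "1", "t", "T", "TRUE", "true", "True", "0", "f", "F", "FALSE", "false", "False" and returns an error otherwise, so a missing key (which yields "") safely defaults to false. A minimal standalone sketch of the suggested parsing — the `parseEnforced` helper is illustrative, not part of the actual codebase:

```go
package main

import (
	"fmt"
	"strconv"
)

// parseEnforced reads the "enforced" metadata value and parses it to a bool,
// defaulting to false when the key is missing or the value is malformed.
func parseEnforced(metadata map[string]string) bool {
	enforced, err := strconv.ParseBool(metadata["enforced"])
	if err != nil {
		return false
	}
	return enforced
}

func main() {
	fmt.Println(parseEnforced(map[string]string{"enforced": "true"}))  // true
	fmt.Println(parseEnforced(map[string]string{"enforced": "false"})) // false: value parsed, not key presence
	fmt.Println(parseEnforced(map[string]string{}))                    // false: key missing
}
```

Unlike the key-presence check, this treats `"enforced":"false"` as not enforced, which is the behavior the review asks for.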
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@client/internal/engine.go`:
- Around line 317-319: Engine.Stop currently mutates updater policy by calling
e.updateManager.SetDownloadOnly(), which resets auto-update state during
teardown; remove that call (or at minimum guard it so it only runs when an
explicit policy-reset intent is present) so Stop no longer changes updateManager
state implicitly. Locate the Engine.Stop method and delete the
e.updateManager.SetDownloadOnly() invocation (or replace it with a conditional
that checks an explicit reset flag passed into Stop) so auto-update settings are
preserved when autoUpdateSettings == nil.

---

Outside diff comments:
In `@client/internal/updatemanager/manager.go`:
- Around line 174-184: Currently m.downloadOnly and m.forceUpdate are set before
attempting to parse the explicit version (v.NewVersion), which means a malformed
version can leave the manager out of download-only mode with no valid
expectedVersion; move the assignments so that m.downloadOnly = false and
m.forceUpdate = forceUpdate are applied only after v.NewVersion succeeds (and
after the code that sets m.updateToLatestVersion and m.expectedVersion), and
ensure that on parse error you return without changing
m.downloadOnly/m.forceUpdate or that you explicitly restore them to their
previous values so the manager remains in a consistent download-only state.

In `@client/proto/daemon.proto`:
- Around line 230-233: The UpRequest message removed the optional bool
autoUpdate = 3 but did not reserve field number 3; update the proto by adding a
reserved field number to the UpRequest message (reserve 3) so future fields
cannot reuse that tag and old clients/servers won't misinterpret legacy
autoUpdate data—modify the UpRequest message declaration (message UpRequest) to
include a reservation for field number 3.

---

Duplicate comments:
In `@client/android/client.go`:
- Around line 127-128: No change required: both NewConnectClient call sites are
already updated to the 3-argument constructor form; verify that
internal.NewConnectClient(ctx, cfg, c.recorder) is used at the client.android
site and that the returned connectClient.RunOnAndroid invocation still passes
the existing arguments (c.tunAdapter, c.iFaceDiscover, c.networkChangeListener,
slices.Clone(dns.items), dnsReadyListener, stateFile); if any 4-argument
NewConnectClient remains, replace it with the 3-arg form and ensure callers use
the returned client's RunOnAndroid method.

In `@client/internal/engine_test.go`:
- Around line 263-268: Test EngineServices initializers omit StateManager
causing e.stateManager panics when tests call Start()/Stop(); update every
EngineServices{ ... } test construction (the blocks that currently set
SignalClient, MgmClient, RelayManager, StatusRecorder) to also set StateManager
to a test implementation (for example a provided mock like &state.MockManager{}
or an in-memory/test helper such as state.NewTestManager()) that implements the
StateManager interface; if no mock exists, add a minimal test stub implementing
StateManager and use it in those EngineServices initializers so e.stateManager
is non-nil during Start()/Stop() paths.

In `@client/internal/updatemanager/manager_linux_test.go`:
- Line 61: The 10ms timeout passed to waitForUpdateEvent is too small for CI;
update all calls to waitForUpdateEvent in manager_linux_test.go (the three
usages that assign to ver, enforced and the other two calls) to use a larger,
stable duration (e.g. 100*time.Millisecond or 250*time.Millisecond) or introduce
a shared constant like testWait = 100*time.Millisecond and replace the literal
10*time.Millisecond with that constant; keep the call signature unchanged and
run tests to confirm stability.

In `@client/internal/updatemanager/manager_test_helpers_test.go`:
- Around line 42-44: The metadata key check currently uses map-key existence (_,
enforced := event.Metadata["enforced"]) which treats any stored string
(including "false") as true; update the logic that reads
event.Metadata["enforced"] to parse the string value (e.g., v, ok :=
event.Metadata["enforced"]; if ok { enforced = strings.EqualFold(v, "true") }
else { enforced = false }) so only the literal "true" (case-insensitive) yields
true; ensure you import strings and update the code path that returns (val,
enforced) so enforced reflects the parsed boolean rather than key presence.

In `@client/internal/updatemanager/manager_test.go`:
- Line 59: The test uses a brittle 10*time.Millisecond timeout when calling
waitForUpdateEvent; update all test calls to use a larger, CI-friendly duration
(e.g., 250*time.Millisecond or 1*time.Second) to avoid flakes. Search for
waitForUpdateEvent(sub, 10*time.Millisecond) in manager_test.go (the occurrences
around the test helper calls) and replace the 10*time.Millisecond literal with
the chosen larger duration at each call site so all async waits use the new
timeout.

In `@client/internal/updatemanager/manager.go`:
- Around line 141-147: When switching update policies (e.g., in SetDownloadOnly
and the other policy-mutator methods at the indicated locations), clear any
stale pending/enforcement state so Install() won't act on an old target: inside
the updateMutex-protected section, in addition to setting
downloadOnly/forceUpdate/updateToLatestVersion/expectedVersion, also set
m.pendingVersion = nil, m.pendingEnforcement = nil (or the equivalent fields
that track pending targets/enforcements) and reset any related flags (e.g.,
forceUpdate) so no leftover pending state survives the policy transition.
- Around line 200-214: Concurrent calls to Manager.install via Install() and the
forced-update path can run the installer twice; serialize installer execution by
acquiring a shared install lock before checking pending and calling install.
Modify Install() (and ensure the forced-update flow that calls install() also
uses the same guard) to either hold m.updateMutex (or introduce a dedicated
m.installMutex) around the pendingVersion check and the subsequent call to
m.install(ctx, pending) so the mutex remains held until m.install returns;
reference Manager, Install, install, updateMutex and pendingVersion when making
the change so both manual and forced-update callers use the same lock.

In `@client/internal/updatemanager/supported_darwin.go`:
- Around line 11-17: The log message in isAutoUpdateSupported() uses the
incorrect spelling "Home Brew"; update the string passed to log.Warnf in that
function (the call to log.Warnf("auto-update disabled on Home Brew
installation")) to use the canonical "Homebrew" spelling so the message reads
"auto-update disabled on Homebrew installation".

In `@client/server/event.go`:
- Line 17: SubscribeEvents calls startUpdateManagerForGUI on each subscription,
causing a race when multiple subscribers or reconnects occur; add a guard so the
update manager is initialized exactly once. Modify the Server struct to include
a sync.Once (e.g., updateManagerOnce) or a mutex+bool flag and change
SubscribeEvents to call updateManagerOnce.Do(s.startUpdateManagerForGUI) (or
lock, check flag, call startUpdateManagerForGUI, set flag, unlock) instead of
invoking startUpdateManagerForGUI unconditionally to prevent concurrent
initialization races.
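The once-only initialization described above can be sketched with sync.Once; the struct and method names here are illustrative stand-ins for the daemon's actual Server type, not its real fields:

```go
package main

import (
	"fmt"
	"sync"
)

// server is a stand-in for the daemon Server type; only the fields needed
// to illustrate the once-only guard are included.
type server struct {
	updateManagerOnce sync.Once
	started           int // counts real initializations, for demonstration
}

func (s *server) startUpdateManagerForGUI() {
	s.started++ // the real method would construct and start the update manager
}

// subscribeEvents may be called by many subscribers or reconnects concurrently;
// sync.Once guarantees the update manager is initialized exactly once.
func (s *server) subscribeEvents() {
	s.updateManagerOnce.Do(s.startUpdateManagerForGUI)
}

func main() {
	s := &server{}
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			s.subscribeEvents()
		}()
	}
	wg.Wait()
	fmt.Println(s.started) // 1
}
```

sync.Once is the lighter option compared to a mutex+bool flag, since Do both serializes callers and publishes the completed initialization to later ones.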

In `@client/ui/client_ui.go`:
- Around line 851-852: The call uses the struct-level context and client
(s.conn.Up(s.ctx, ...)) which ignores the request-scoped context and client;
change the call to use the method-scoped variables (conn.Up(ctx,
&proto.UpRequest{}) ) and pass ctx instead of s.ctx so the per-request
cancellation/timeout is honored—update the invocation of Up to use conn and ctx
rather than s.conn and s.ctx.

In `@client/ui/event_handler.go`:
- Around line 214-246: The enforced-update branch in handleUpdateClick performs
blocking getSrvClient and TriggerUpdate calls on the UI event loop; move the
entire enforced-update sequence into a new goroutine so menu handling is
non-blocking, and when calling TriggerUpdate use a context with a short timeout
(derive from h.client.ctx with context.WithTimeout) so the RPC can't hang
indefinitely; keep error logging and the fallback openURL(version.DownloadUrl())
behavior inside the goroutine, ensure you cancel the timeout context, and
continue to use getSrvClient/TriggerUpdate/openURL/resp.Success checks as
before.

---

Nitpick comments:
In `@client/server/server.go`:
- Around line 219-221: The no-retry branch uses server fields instead of the
function parameters; change the call inside the DisableAutoConnect branch to
pass the provided profileConfig and statusRecorder (and keep ctx and
runningChan) rather than s.config and s.statusRecorder so the code uses the
method parameters consistently — update the s.connect invocation to
s.connect(ctx, profileConfig, statusRecorder, runningChan) (ensure names match
the surrounding function signature).

In `@client/ui/client_ui.go`:
- Around line 1133-1135: The code currently treats enforced by presence only;
change the handling in the event.Metadata block so that instead of using "_,
enforced := event.Metadata[\"enforced\"]" you read the metadata value (e.g.,
val, ok := event.Metadata["enforced"]) and parse it to a boolean (using
strconv.ParseBool or explicit val == "true") into the enforced variable,
defaulting to false on missing/parse error, then pass that boolean into
log.Infof (references: event.Metadata, newVersion, enforced, log.Infof); also
add the strconv import to the file.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 9a6a72e and e7ec0d8.

⛔ Files ignored due to path filters (2)
  • client/proto/daemon.pb.go is excluded by !**/*.pb.go
  • client/proto/daemon_grpc.pb.go is excluded by !**/*.pb.go
📒 Files selected for processing (25)
  • client/android/client.go
  • client/cmd/up.go
  • client/embed/embed.go
  • client/internal/connect.go
  • client/internal/engine.go
  • client/internal/engine_test.go
  • client/internal/updatemanager/manager.go
  • client/internal/updatemanager/manager_linux_test.go
  • client/internal/updatemanager/manager_test.go
  • client/internal/updatemanager/manager_test_helpers_test.go
  • client/internal/updatemanager/manager_unsupported.go
  • client/internal/updatemanager/supported_darwin.go
  • client/internal/updatemanager/supported_other.go
  • client/internal/updatemanager/supported_windows.go
  • client/ios/NetBirdSDK/client.go
  • client/proto/daemon.proto
  • client/server/event.go
  • client/server/server.go
  • client/server/server_test.go
  • client/server/triggerupdate.go
  • client/ui/client_ui.go
  • client/ui/event/event.go
  • client/ui/event_handler.go
  • client/ui/profile.go
  • client/ui/quickactions.go
💤 Files with no reviewable changes (1)
  • client/internal/updatemanager/manager_unsupported.go

Comment thread client/internal/engine.go
pappz and others added 4 commits February 26, 2026 10:03
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Wrap the update trigger logic in a goroutine to prevent the UI from hanging on a potentially slow server response.
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
client/ui/event_handler.go (1)

235-244: ⚠️ Potential issue | 🟠 Major

Add a bounded timeout for TriggerUpdate RPC.

TriggerUpdate is now off the UI goroutine, but on Line 235 it still uses h.client.ctx directly. If the daemon stalls, this goroutine can block indefinitely and never reach the fallback behavior.

💡 Suggested patch
 	go func() {
 		conn, err := h.client.getSrvClient(defaultFailTimeout)
 		if err != nil {
 			log.Errorf("failed to get service client for update: %v", err)
 			_ = openURL(version.DownloadUrl())
 			return
 		}
 
-		resp, err := conn.TriggerUpdate(h.client.ctx, &proto.TriggerUpdateRequest{})
+		rpcCtx, cancel := context.WithTimeout(h.client.ctx, defaultFailTimeout)
+		defer cancel()
+
+		resp, err := conn.TriggerUpdate(rpcCtx, &proto.TriggerUpdateRequest{})
 		if err != nil {
+			if errors.Is(err, context.Canceled) || status.Code(err) == codes.Canceled {
+				log.Debug("TriggerUpdate canceled")
+				return
+			}
 			log.Errorf("TriggerUpdate failed: %v", err)
 			_ = openURL(version.DownloadUrl())
 			return
 		}

Use this script to confirm the call site currently lacks a timeout and verify after patching:

#!/bin/bash
set -euo pipefail

rg -n -C4 'TriggerUpdate\(' client/ui/event_handler.go
rg -n -C3 'WithTimeout\(' client/ui/event_handler.go

Expected result after fix: the TriggerUpdate call appears near a context.WithTimeout(...) creation in client/ui/event_handler.go.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/ui/event_handler.go` around lines 235 - 244, The TriggerUpdate call
uses h.client.ctx directly and can block indefinitely; wrap the RPC in a short
bounded context by creating a context.WithTimeout (e.g., 5–10s), defer the
cancel, and pass that new ctx to conn.TriggerUpdate instead of h.client.ctx;
update error handling to treat context deadline/cancel as a failure path that
falls back to calling openURL(version.DownloadUrl()) (same as current error
branches) and ensure defer cancel is added to avoid leaks around the call in the
function containing conn.TriggerUpdate.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@client/ui/event_handler.go`:
- Around line 235-244: The TriggerUpdate call uses h.client.ctx directly and can
block indefinitely; wrap the RPC in a short bounded context by creating a
context.WithTimeout (e.g., 5–10s), defer the cancel, and pass that new ctx to
conn.TriggerUpdate instead of h.client.ctx; update error handling to treat
context deadline/cancel as a failure path that falls back to calling
openURL(version.DownloadUrl()) (same as current error branches) and ensure defer
cancel is added to avoid leaks around the call in the function containing
conn.TriggerUpdate.

ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between 5404e0e and b1ac1ad.

📒 Files selected for processing (3)
  • client/internal/updatemanager/manager.go
  • client/internal/updatemanager/manager_test_helpers_test.go
  • client/ui/event_handler.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • client/internal/updatemanager/manager_test_helpers_test.go

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@client/internal/updatemanager/manager_test.go`:
- Around line 192-195: The test currently only checks the returned enforced
boolean from waitForUpdateEvent(sub, 500*time.Millisecond) which can be false
when no event was emitted; update the assertion to first verify an event was
actually received before inspecting enforced. Change the call to capture both
the event and enforced (e.g., evt, enforced := waitForUpdateEvent(sub,
500*time.Millisecond)), assert that evt is non-nil (or fail the test) and only
then assert that enforced is false for Mode 1; reference waitForUpdateEvent and
the test's sub subscription variable to locate the change.

ℹ️ Review info


📥 Commits

Reviewing files that changed from the base of the PR and between b1ac1ad and 31b4c75.

📒 Files selected for processing (3)
  • client/internal/updatemanager/manager_linux_test.go
  • client/internal/updatemanager/manager_test.go
  • client/internal/updatemanager/supported_darwin.go

Comment thread client/internal/updatemanager/manager_test.go Outdated
pappz and others added 2 commits February 26, 2026 10:29
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
@pappz pappz requested review from lixmal and mlsmaycon March 3, 2026 17:55
Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
client/internal/updater/doc.go (1)

1-24: ⚠️ Potential issue | 🟡 Minor

Clarify lifecycle wording to distinguish GUI vs daemon behavior.

The overview currently reads as if background monitoring is always on, but the updater's Start() is intentionally scoped to the GUI flow. Please clarify this to avoid implying that the daemon auto-starts updates.

Based on learnings: updater Start() is deliberately invoked only via GUI flow (startUpdateManagerForGUI()), while daemon Start() only runs CheckUpdateSuccess().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updater/doc.go` around lines 1 - 24, Update the package
documentation to clarify lifecycle: change wording that implies always-on
background monitoring to explicitly state that updater.Start() is invoked only
in the GUI flow via startUpdateManagerForGUI(), while the daemon path only calls
CheckUpdateSuccess() (i.e., daemon does not run continuous monitoring).
Reference the Start(), startUpdateManagerForGUI(), and CheckUpdateSuccess()
symbols and update the Overview and Update Flow sections so steps describing
background monitoring and continuous operation are conditioned on the
GUI-invoked Start() path rather than presented as daemon behavior.
client/internal/updater/manager.go (1)

179-205: ⚠️ Potential issue | 🟠 Major

SetVersion can silently ignore policy changes when the version string is unchanged.

At Line 196, returning on version equality skips forceUpdate transition handling (e.g., enforce → force or force → enforce for the same version). Also, the empty-version branch on Line 179 switches to download-only without resetting forceUpdate, which can leave stale policy state.

🔧 Suggested fix
 	if expectedVersion == "" {
 		log.Errorf("empty expected version provided")
 		m.expectedVersion = nil
 		m.updateToLatestVersion = false
 		m.downloadOnly = true
+		m.forceUpdate = false
 		return
 	}
@@
-		if m.expectedVersion != nil && m.expectedVersion.Equal(expectedSemVer) {
+		sameVersion := m.expectedVersion != nil && m.expectedVersion.Equal(expectedSemVer)
+		sameForce := m.forceUpdate == forceUpdate
+		if sameVersion && sameForce && !m.downloadOnly {
 			return
 		}
 		m.expectedVersion = expectedSemVer
 		m.updateToLatestVersion = false
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updater/manager.go` around lines 179 - 205, The SetVersion
logic can skip updating policy flags when the version string is unchanged and
leaves stale forceUpdate when expectedVersion is empty; modify the SetVersion
implementation so that: when expectedVersion == "" you also reset m.forceUpdate
= false (or set to the passed forceUpdate value) and set
downloadOnly/updateToLatestVersion consistently; and when parsing yields an
expectedSemVer equal to m.expectedVersion do NOT early-return — instead update
m.forceUpdate, m.downloadOnly and m.updateToLatestVersion according to the
incoming forceUpdate and latestVersion comparison so policy transitions (force ↔
enforce and downloadOnly) are always applied even if the semantic version object
is unchanged. Ensure these changes touch the SetVersion branch that checks
expectedVersion == "" and the equality check against expectedSemVer to avoid
skipping policy updates.
client/server/server.go (1)

1613-1622: ⚠️ Potential issue | 🟡 Minor

updateManager initialization in the Up() path is handled safely but is inconsistent with the Start() path.

The Up() method does not initialize updateManager before calling connectWithRetryRuns, meaning s.updateManager will be nil when SetUpdateManager(nil) is called on line 1616. However, the code handles this safely—ConnectClient checks if c.updateManager != nil before using it (line 192-193).

Consider one of these options for consistency:

  1. Add the same lazy initialization logic from Start() (lines 63-67) to the Up() path if updates should be available in both scenarios, or
  2. Document that updateManager is intentionally nil in the Up() path if updates are only needed for GUI flows via Start().
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/server/server.go` around lines 1613 - 1622, The Up() path currently
calls connectWithRetryRuns which leads to connect() invoking
s.connectClient.SetUpdateManager(s.updateManager) while s.updateManager is nil
(unlike the Start() path which lazily initializes updateManager), so either
replicate the lazy initialization from Start() into the Up() path or explicitly
document that updateManager is intentionally nil for Up(). Update the Up()
method to perform the same lazy init used in Start() (initialize s.updateManager
via the same constructor/logic used in Start()) before calling
connectWithRetryRuns if updates should be available, or add a comment in Up()
clarifying that s.updateManager is deliberately left nil and
SetUpdateManager(nil) is expected.
♻️ Duplicate comments (2)
client/proto/daemon.proto (1)

233-235: ⚠️ Potential issue | 🟠 Major

Reserve removed UpRequest field tag/name instead of leaving a comment.

The field is removed, but tag 3 and name autoUpdate are not actually reserved, which leaves room for accidental reuse and wire-compat bugs.

💡 Proposed fix
 message UpRequest {
   optional string profileName = 1;
   optional string username = 2;
-  // deprecated, reserve 3
-  // optional bool autoUpdate = 3;
+  reserved 3;
+  reserved "autoUpdate";
 }

Run this to verify current schema state:

#!/bin/bash
rg -n -C3 'message UpRequest|autoUpdate|reserved 3|reserved "autoUpdate"' client/proto/daemon.proto

Expected: no active reserved 3 / reserved "autoUpdate" entries in UpRequest before applying the fix.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/proto/daemon.proto` around lines 233 - 235, In the UpRequest protobuf
message, reserve the removed field tag and name to prevent accidental reuse by
adding reserved entries for the numeric tag 3 and the string "autoUpdate" (e.g.,
inside the message UpRequest add reserved 3; and reserved "autoUpdate";) so the
identifier autoUpdate and tag 3 cannot be reintroduced; update the UpRequest
message declaration accordingly and re-run the provided rg check to verify
reserved 3 and reserved "autoUpdate" are present.
client/ui/event_handler.go (1)

237-240: ⚠️ Potential issue | 🟠 Major

Add a deadline to TriggerUpdate to avoid leaving the update action disabled too long.

Line 237 uses h.client.ctx directly. If the RPC hangs, the deferred re-enable on Line 229 is delayed indefinitely.

⏱️ Suggested fix
-		resp, err := conn.TriggerUpdate(h.client.ctx, &proto.TriggerUpdateRequest{})
+		rpcCtx, cancel := context.WithTimeout(h.client.ctx, defaultFailTimeout)
+		defer cancel()
+		resp, err := conn.TriggerUpdate(rpcCtx, &proto.TriggerUpdateRequest{})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/ui/event_handler.go` around lines 237 - 240, The TriggerUpdate call
currently uses h.client.ctx directly and can hang, delaying the deferred
re-enable; fix by creating a short timeout context via ctx, cancel :=
context.WithTimeout(h.client.ctx, 5*time.Second) (or an appropriate timeout)
before calling conn.TriggerUpdate and pass that ctx to conn.TriggerUpdate,
ensure defer cancel() runs immediately after creation, and use the returned
response/error as before so the earlier deferred UI re-enable tied to this
handler is not blocked.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@client/internal/updater/supported_darwin.go`:
- Around line 11-13: isAutoUpdateSupported currently calls
installer.TypeOfInstaller with context.Background(), which can lead to an
unbounded exec.CommandContext call (pkgutil --pkg-info) and block indefinitely;
change the call site in isAutoUpdateSupported to create a cancellable timeout
context (e.g., context.WithTimeout) with a reasonable short deadline, pass that
context into installer.TypeOfInstaller and any exec.CommandContext invocation
that runs pkgutil, and ensure you call the returned cancel function to avoid
context leaks so the pkgutil process cannot hang forever.

---

Outside diff comments:
In `@client/internal/updater/doc.go`:
- Around line 1-24: Update the package documentation to clarify lifecycle:
change wording that implies always-on background monitoring to explicitly state
that updater.Start() is invoked only in the GUI flow via
startUpdateManagerForGUI(), while the daemon path only calls
CheckUpdateSuccess() (i.e., daemon does not run continuous monitoring).
Reference the Start(), startUpdateManagerForGUI(), and CheckUpdateSuccess()
symbols and update the Overview and Update Flow sections so steps describing
background monitoring and continuous operation are conditioned on the
GUI-invoked Start() path rather than presented as daemon behavior.

In `@client/internal/updater/manager.go`:
- Around line 179-205: The SetVersion logic can skip updating policy flags when
the version string is unchanged and leaves stale forceUpdate when
expectedVersion is empty; modify the SetVersion implementation so that: when
expectedVersion == "" you also reset m.forceUpdate = false (or set to the passed
forceUpdate value) and set downloadOnly/updateToLatestVersion consistently; and
when parsing yields an expectedSemVer equal to m.expectedVersion do NOT
early-return — instead update m.forceUpdate, m.downloadOnly and
m.updateToLatestVersion according to the incoming forceUpdate and latestVersion
comparison so policy transitions (force ↔ enforce and downloadOnly) are always
applied even if the semantic version object is unchanged. Ensure these changes
touch the SetVersion branch that checks expectedVersion == "" and the equality
check against expectedSemVer to avoid skipping policy updates.

In `@client/server/server.go`:
- Around line 1613-1622: The Up() path currently calls connectWithRetryRuns
which leads to connect() invoking
s.connectClient.SetUpdateManager(s.updateManager) while s.updateManager is nil
(unlike the Start() path which lazily initializes updateManager), so either
replicate the lazy initialization from Start() into the Up() path or explicitly
document that updateManager is intentionally nil for Up(). Update the Up()
method to perform the same lazy init used in Start() (initialize s.updateManager
via the same constructor/logic used in Start()) before calling
connectWithRetryRuns if updates should be available, or add a comment in Up()
clarifying that s.updateManager is deliberately left nil and
SetUpdateManager(nil) is expected.

---

Duplicate comments:
In `@client/proto/daemon.proto`:
- Around line 233-235: In the UpRequest protobuf message, reserve the removed
field tag and name to prevent accidental reuse by adding reserved entries for
the numeric tag 3 and the string "autoUpdate" (e.g., inside the message
UpRequest add reserved 3; and reserved "autoUpdate";) so the identifier
autoUpdate and tag 3 cannot be reintroduced; update the UpRequest message
declaration accordingly and re-run the provided rg check to verify reserved 3
and reserved "autoUpdate" are present.
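The reservation the comment describes is a one-line change inside the message; other UpRequest fields are omitted here for brevity:

```proto
message UpRequest {
  // Tag 3 previously carried autoUpdate; reserve both the tag and the
  // name so they cannot be accidentally reintroduced.
  reserved 3;
  reserved "autoUpdate";
}
```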

In `@client/ui/event_handler.go`:
- Around line 237-240: The TriggerUpdate call currently uses h.client.ctx
directly and can hang, delaying the deferred re-enable; fix by creating a short
timeout context via ctx, cancel := context.WithTimeout(h.client.ctx,
5*time.Second) (or an appropriate timeout) before calling conn.TriggerUpdate and
pass that ctx to conn.TriggerUpdate, ensure defer cancel() runs immediately
after creation, and use the returned response/error as before so the earlier
deferred UI re-enable tied to this handler is not blocked.

ℹ️ Review info

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between ecdafa9 and aff938d.

⛔ Files ignored due to path filters (2)
  • client/internal/updater/reposign/certs/root-pub.pem is excluded by !**/*.pem
  • client/internal/updater/reposign/certsdev/root-pub.pem is excluded by !**/*.pem
📒 Files selected for processing (56)
  • client/cmd/signer/artifactkey.go
  • client/cmd/signer/artifactsign.go
  • client/cmd/signer/revocation.go
  • client/cmd/signer/rootkey.go
  • client/cmd/update_supported.go
  • client/internal/connect.go
  • client/internal/debug/debug.go
  • client/internal/engine.go
  • client/internal/updater/doc.go
  • client/internal/updater/downloader/downloader.go
  • client/internal/updater/downloader/downloader_test.go
  • client/internal/updater/installer/binary_nowindows.go
  • client/internal/updater/installer/binary_windows.go
  • client/internal/updater/installer/doc.go
  • client/internal/updater/installer/installer.go
  • client/internal/updater/installer/installer_common.go
  • client/internal/updater/installer/installer_log_darwin.go
  • client/internal/updater/installer/installer_log_windows.go
  • client/internal/updater/installer/installer_run_darwin.go
  • client/internal/updater/installer/installer_run_windows.go
  • client/internal/updater/installer/log.go
  • client/internal/updater/installer/procattr_darwin.go
  • client/internal/updater/installer/procattr_windows.go
  • client/internal/updater/installer/repourl_dev.go
  • client/internal/updater/installer/repourl_prod.go
  • client/internal/updater/installer/result.go
  • client/internal/updater/installer/types.go
  • client/internal/updater/installer/types_darwin.go
  • client/internal/updater/installer/types_windows.go
  • client/internal/updater/manager.go
  • client/internal/updater/manager_linux_test.go
  • client/internal/updater/manager_test.go
  • client/internal/updater/manager_test_helpers_test.go
  • client/internal/updater/reposign/artifact.go
  • client/internal/updater/reposign/artifact_test.go
  • client/internal/updater/reposign/doc.go
  • client/internal/updater/reposign/embed_dev.go
  • client/internal/updater/reposign/embed_prod.go
  • client/internal/updater/reposign/key.go
  • client/internal/updater/reposign/key_test.go
  • client/internal/updater/reposign/revocation.go
  • client/internal/updater/reposign/revocation_test.go
  • client/internal/updater/reposign/root.go
  • client/internal/updater/reposign/root_test.go
  • client/internal/updater/reposign/signature.go
  • client/internal/updater/reposign/signature_test.go
  • client/internal/updater/reposign/verify.go
  • client/internal/updater/reposign/verify_test.go
  • client/internal/updater/supported_darwin.go
  • client/internal/updater/supported_other.go
  • client/internal/updater/supported_windows.go
  • client/internal/updater/update.go
  • client/proto/daemon.proto
  • client/server/server.go
  • client/server/updateresult.go
  • client/ui/event_handler.go
✅ Files skipped from review due to trivial changes (1)
  • client/cmd/signer/artifactkey.go

Comment thread client/internal/updater/supported_darwin.go
mlsmaycon
mlsmaycon previously approved these changes Mar 3, 2026
lixmal
lixmal previously approved these changes Mar 4, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
client/ui/client_ui.go (1)

830-857: ⚠️ Potential issue | 🟡 Minor

Use the method context for Up RPC.

Line 852 ignores the ctx argument and calls Up with s.ctx. This drops caller cancellation/timeout semantics.

Proposed fix
-	if _, err := s.conn.Up(s.ctx, &proto.UpRequest{}); err != nil {
+	if _, err := conn.Up(ctx, &proto.UpRequest{}); err != nil {
 		return fmt.Errorf("start connection: %w", err)
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/ui/client_ui.go` around lines 830 - 857, The Up RPC in menuUpClick
uses s.conn.Up(s.ctx, ...) which ignores the caller's ctx and drops
cancellation/timeout; change the call to use the local conn and the passed-in
ctx (i.e., conn.Up(ctx, &proto.UpRequest{})) so the caller-provided context
controls cancellation/timeouts and you invoke the RPC on the same conn used for
Status.
🧹 Nitpick comments (1)
client/internal/updater/manager.go (1)

251-254: Remove duplicate comment line.

There are two comment lines that both describe the same thing for NotifyUI.

Proposed fix
 // NotifyUI re-publishes the current update state to a newly connected UI client.
 // Only needed for download-only mode where the latest version is already cached
-// NotifyUI re-publishes the current update state so a newly connected UI gets the info.
 func (m *Manager) NotifyUI() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@client/internal/updater/manager.go` around lines 251 - 254, The comment for
NotifyUI contains a duplicated sentence; edit the doc comment above func (m
*Manager) NotifyUI to remove the redundant line so only a single, clear
description remains (keep the line that best reads—refer to the NotifyUI method
on type Manager to locate the comment).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@client/ui/client_ui.go`:
- Around line 830-857: The Up RPC in menuUpClick uses s.conn.Up(s.ctx, ...)
which ignores the caller's ctx and drops cancellation/timeout; change the call
to use the local conn and the passed-in ctx (i.e., conn.Up(ctx,
&proto.UpRequest{})) so the caller-provided context controls
cancellation/timeouts and you invoke the RPC on the same conn used for Status.

---

Nitpick comments:
In `@client/internal/updater/manager.go`:
- Around line 251-254: The comment for NotifyUI contains a duplicated sentence;
edit the doc comment above func (m *Manager) NotifyUI to remove the redundant
line so only a single, clear description remains (keep the line that best
reads—refer to the NotifyUI method on type Manager to locate the comment).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 35e98eab-ebd0-4725-8c1b-0585de80bc9d

📥 Commits

Reviewing files that changed from the base of the PR and between a4ae885 and 88fabee.

📒 Files selected for processing (3)
  • client/internal/updater/installer/result.go
  • client/internal/updater/manager.go
  • client/ui/client_ui.go

Comment thread client/proto/daemon.proto Outdated
pappz added 2 commits March 12, 2026 15:35
* [management] feat: add auto_update_always account setting

Add a new AutoUpdateAlways boolean setting that controls whether client
updates are installed automatically in the background or require user
interaction from the UI. Includes API, gRPC, activity events, and tests.

* Generate types
@pappz pappz requested a review from lixmal March 12, 2026 15:41
@sonarqubecloud

@pappz pappz merged commit fe9b844 into main Mar 13, 2026
60 of 61 checks passed
@pappz pappz deleted the refactor/auto-update branch March 13, 2026 16:01

3 participants