
[client] Fix netstack upstream dns and add wasm debug methods #4648

Merged
lixmal merged 10 commits into main from wasm-status
Jan 14, 2026

Conversation

@lixmal
Collaborator

@lixmal lixmal commented Oct 16, 2025

Describe your changes

  • Make the upstream DNS handler use netstack (JS only)
  • Make the DNS interceptor use netstack to reach the DNS forwarder of routing peers
  • Refactor status methods: ParseToJSON() → JSON(), ParseToYAML() → YAML(), ParseGeneralSummary() → GeneralSummary(), ParseToFullDetailSummary() → FullDetailSummary()
  • Move toProtoFullStatus() → FullStatus.ToProto() method
  • Include events in FullStatus (remove the GetEvents RPC)
  • Add embed client methods: Status(), GetLatestSyncResponse(), SetLogLevel()
  • Add netbird js client debug methods:
    • ping(hostname)
    • pingTCP(hostname, port)
    • status()
    • statusDetail()
    • statusSummary()
    • setLogLevel(level)
    • getSyncResponse()
  • Fix embed client Start() to track connect instead of cancel for state checking
  • Fix embed client Stop() to cancel context and clean up properly
  • Skip STUN/TURN probes on JS/WASM platform
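The status refactor listed above turns free Parse* helpers into methods on the overview type. A minimal sketch of the pattern (the type and fields here are hypothetical stand-ins, not the real OutputOverview):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// outputOverview is a hypothetical stand-in for status.OutputOverview.
type outputOverview struct {
	Management string `json:"management"`
	Signal     string `json:"signal"`
}

// JSON replaces the old free function ParseToJSON(overview).
func (o outputOverview) JSON() (string, error) {
	b, err := json.Marshal(o)
	return string(b), err
}

func main() {
	o := outputOverview{Management: "Connected", Signal: "Connected"}
	s, _ := o.JSON()
	fmt.Println(s) // {"management":"Connected","signal":"Connected"}
}
```

Callers change from `status.ParseToJSON(overview)` to `overview.JSON()`, which keeps the formatting logic next to the data it formats.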

Issue ticket number and link

Stack

Checklist

  • It is a bug fix
  • It is a typo/documentation fix
  • It is a feature enhancement
  • It is a refactor
  • Created tests that fail without the change (if possible)

By submitting this pull request, you confirm that you have read and agree to the terms of the Contributor License Agreement.

Documentation

Select exactly one:

  • I added/updated documentation for this change
  • Documentation is not needed for this change (explain why)

Docs PR URL (required if "docs added" is checked)

Paste the PR link from https://github.com/netbirdio/docs here:

https://github.com/netbirdio/docs/pull/__

Summary by CodeRabbit

  • New Features

    • Client: richer status (includes event history), fetch latest sync response, and runtime log-level control.
    • WASM: exposed status (overview/summary/detail), getSyncResponse, ping and TCP-ping diagnostics.
  • Refactor

    • Status formatting now offered via object methods for simpler consumption.
    • DNS: added netstack-backed DNS path for JS environments; health probing skips network probes on WebAssembly.
  • Removed

    • Server RPC for polling historical events eliminated.


Copilot AI review requested due to automatic review settings October 16, 2025 00:00
Contributor

Copilot AI left a comment


Pull Request Overview

This PR fixes netstack upstream DNS functionality and adds debug methods to the WASM client. The main focus is refactoring status-related functionality and improving DNS handling in netstack environments.

Key Changes:

  • Refactors status methods to use method receivers instead of standalone functions
  • Enhances netstack DNS support with upstream resolution through tunnel
  • Adds comprehensive debug methods (ping, status reporting, log level control) to WASM client

Reviewed Changes

Copilot reviewed 27 out of 27 changed files in this pull request and generated 4 comments.

Summary per file:

  • client/wasm/cmd/main.go: Adds extensive debug methods including ping, status reporting, and log level management
  • client/status/status.go: Refactors standalone functions into methods on the OutputOverview struct
  • client/internal/dns/upstream*.go: Updates DNS resolvers to use netstack when available for tunnel routing
  • client/internal/peer/status.go: Moves protobuf conversion logic and adds event history to FullStatus
  • client/server/server.go: Removes duplicate protobuf conversion code, delegates to peer status
  • client/embed/embed.go: Adds status, sync response, and log level management methods


Comment thread client/wasm/cmd/main.go Outdated
Comment thread client/wasm/cmd/main.go Outdated
Comment thread client/wasm/cmd/main.go Outdated
Comment thread client/internal/routemanager/dnsinterceptor/handler.go
@lixmal lixmal changed the title [client] Fix netstack upstream dns and add debug methods [client] Fix netstack upstream dns and add wasm debug methods Oct 16, 2025
@lixmal lixmal marked this pull request as ready for review October 16, 2025 00:43

@coderabbitai
Contributor

coderabbitai Bot commented Nov 27, 2025

Note

Other AI code review bot(s) detected

CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

📝 Walkthrough

Walkthrough

Refactors status formatting helpers into methods on OutputOverview; extends embedded client with a recorder, status/sync retrieval, and SetLogLevel; integrates netstack-backed DNS exchange with a WGIface.GetNet() accessor and JS-path routing; adds FullStatus→proto conversion and updates server RPCs/callsites.

Changes

Cohort / File(s) Summary
Status methods & callers
client/status/status.go, client/status/status_test.go, client/cmd/debug.go, client/cmd/status.go, client/ui/debug.go
Converted standalone Parse* helpers into OutputOverview methods (JSON(), YAML(), GeneralSummary(...), FullDetailSummary()); updated tests and all call sites.
Embedded client: status, sync, log control
client/embed/embed.go
Added recorder *peer.Status, cancellable start/stop flow, and public methods Status(), GetLatestSyncResponse(), SetLogLevel(string); engine startup/stop logic adjusted.
Connect client log proxy
client/internal/connect.go
Added SetLogLevel(level log.Level) on ConnectClient that forwards to engine/firewall manager when available.
FullStatus → proto mapping
client/internal/peer/status.go, client/server/server.go
Added Events []*proto.SystemEvent to FullStatus and FullStatus.ToProto(); server now calls fullStatus.ToProto() and removed local converter.
Server RPCs & handlers
client/server/event.go, client/server/debug.go
Removed GetEvents() RPC handler; simplified SetLogLevel to delegate to connectClient.SetLogLevel().
WGIface extension (netstack)
client/internal/dns/wgiface.go, client/internal/dns/wgiface_windows.go, client/internal/routemanager/iface/iface_common.go
Added GetNet() *netstack.Net to WGIface/wgIfaceBase and updated imports.
DNS upstream: netstack integration & API change
client/internal/dns/upstream.go, client/internal/dns/upstream_general.go, client/internal/dns/upstream_ios.go, client/internal/dns/upstream_android.go
newUpstreamResolver() now accepts a WGIface instead of separate interface name/IP/net; upstream resolver gains nsNet *netstack.Net; added ExchangeWithNetstack() and conditional netstack path (used for JS builds).
DNS interceptor & handler refactor
client/internal/routemanager/dnsinterceptor/handler.go
Extracted upstream query into queryUpstreamDNS() (uses netstack when available); changed interceptor wgInterface to iface.WGIface.
DNS tests & mocks
client/internal/dns/server_test.go, client/internal/dns/upstream_test.go
Added GetNet() to mock WGIface; introduced mockNetstackProvider and updated tests to exercise netstack code paths.
Engine health probes (JS path)
client/internal/engine.go
Skip STUN/TURN probing on GOOS==js; relay health assumed healthy on JS path.
WASM client features
client/wasm/cmd/main.go
Added JS bindings for status/statusSummary/statusDetail/getSyncResponse/setLogLevel, ping/pingTCP, input validation, and JSON/proto wiring for status and sync responses.

Sequence Diagram(s)

sequenceDiagram
    participant Embedded as Embedded Client
    participant Connect as ConnectClient
    participant Engine as Engine
    participant FW as FirewallManager

    Embedded->>Embedded: SetLogLevel(levelStr)
    Embedded->>Connect: SetLogLevel(log.Level)
    alt Connect has Engine
        Connect->>Engine: GetEngine()
        Engine-->>Connect: engine
        Connect->>FW: GetFirewallManager()
        alt FW exists
            FW->>FW: SetLogLevel(level)
        end
    end

    Embedded->>Embedded: Status() -> reads recorder, may trigger health probes via engine
    Embedded->>Engine: GetLatestSyncResponse() -> returns SyncResponse
sequenceDiagram
    participant Resolver as UpstreamResolver
    participant WGI as WGIface
    participant NetStack as netstack.Net
    participant DNSClient as standard DNS client

    Resolver->>WGI: GetNet(), Address(), Name()
    alt nsNet present and GOOS == js
        Resolver->>NetStack: ExchangeWithNetstack(ctx, nsNet, msg, upstream)
        NetStack-->>Resolver: reply / error
    else
        Resolver->>DNSClient: standard DNS exchange (UDP -> TCP fallback)
        DNSClient-->>Resolver: reply / error
    end
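The branch in the diagram above can be sketched as follows (a simplified stand-in; the real resolver also checks GOOS == js and carries a full *netstack.Net rather than this placeholder):

```go
package main

import "fmt"

// resolver sketches the branch shown above: with a netstack Net present
// (JS builds), queries go through the tunnel; otherwise the standard
// DNS client path with UDP -> TCP fallback is used.
type resolver struct {
	nsNet *struct{} // stand-in for *netstack.Net
}

func (r *resolver) exchangePath() string {
	if r.nsNet != nil {
		return "netstack"
	}
	return "standard"
}

func main() {
	fmt.Println((&resolver{nsNet: &struct{}{}}).exchangePath()) // netstack
	fmt.Println((&resolver{}).exchangePath())                   // standard
}
```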

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes

Possibly related PRs

Suggested reviewers

  • pappz

Poem

🐇 I nibble bytes and tidy trails,

Methods hop where functions grew,
Netstack weaves through DNS vales,
Status, sync, and logs anew,
A rabbit claps — code spring is due.

🚥 Pre-merge checks | ✅ 2 | ❌ 1
❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning: Docstring coverage is 57.89%, below the required threshold of 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
  • Title check ✅ Passed: The title directly describes the main changes: netstack DNS fixes for upstream DNS and the addition of WASM debug methods.
  • Description check ✅ Passed: The description comprehensively covers all major changes, including DNS netstack integration, status method refactoring, embed client methods, and WASM debug methods. All required template sections are addressed with detailed bullet points and checkbox selections.



🧹 Recent nitpick comments
client/embed/embed.go (1)

42-42: Consider resetting recorder in Stop().

The recorder field is set in Start() but not cleared in Stop(). After Stop() completes, Status() will still return data from the stale recorder (line 342-343 only checks if recorder is nil, not if client is stopped). This could be intentional to allow reading final status after stop, but if not, consider setting c.recorder = nil in Stop() alongside clearing c.connect.
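A minimal sketch of the suggested cleanup, using stand-in types for the recorder and connect client (names are illustrative, not the real embed client):

```go
package main

import (
	"fmt"
	"sync"
)

// client sketches the embed client's stop-time cleanup: clearing the
// recorder alongside connect so Status() cannot serve stale data after Stop.
type client struct {
	mu       sync.Mutex
	recorder *struct{} // stand-in for *peer.Status
	connect  *struct{} // stand-in for the connect client
}

func (c *client) Stop() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.connect = nil
	c.recorder = nil // reset so stale status is not served after Stop
}

func (c *client) running() bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.recorder != nil
}

func main() {
	c := &client{recorder: &struct{}{}, connect: &struct{}{}}
	c.Stop()
	fmt.Println(c.running()) // false
}
```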


📜 Recent review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6a5582d and 6ec5a3f.

📒 Files selected for processing (2)
  • client/embed/embed.go
  • client/internal/peer/status.go
🧰 Additional context used
🧬 Code graph analysis (2)
client/internal/peer/status.go (1)
client/proto/daemon.pb.go (24)
  • SystemEvent (3804-3815)
  • SystemEvent (3828-3828)
  • SystemEvent (3843-3845)
  • FullStatus (2140-2154)
  • FullStatus (2167-2167)
  • FullStatus (2182-2184)
  • ManagementState (1820-1827)
  • ManagementState (1840-1840)
  • ManagementState (1855-1857)
  • SignalState (1759-1766)
  • SignalState (1779-1779)
  • SignalState (1794-1796)
  • LocalPeerState (1666-1677)
  • LocalPeerState (1690-1690)
  • LocalPeerState (1705-1707)
  • PeerState (1485-1507)
  • PeerState (1520-1520)
  • PeerState (1535-1537)
  • RelayState (1881-1888)
  • RelayState (1901-1901)
  • RelayState (1916-1918)
  • NSGroupState (1941-1949)
  • NSGroupState (1962-1962)
  • NSGroupState (1977-1979)
client/embed/embed.go (5)
client/internal/peer/status.go (3)
  • Status (187-222)
  • NewRecorder (225-236)
  • FullStatus (152-163)
client/internal/connect.go (1)
  • NewConnectClient (57-71)
client/proto/daemon.pb.go (3)
  • FullStatus (2140-2154)
  • FullStatus (2167-2167)
  • FullStatus (2182-2184)
client/internal/engine.go (1)
  • Engine (138-214)
shared/management/proto/management.pb.go (3)
  • SyncResponse (388-405)
  • SyncResponse (420-420)
  • SyncResponse (435-437)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (21)
  • GitHub Check: Relay / Unit (386)
  • GitHub Check: Relay / Unit (amd64, -race)
  • GitHub Check: Management / Unit (amd64, mysql)
  • GitHub Check: Client (Docker) / Unit
  • GitHub Check: Management / Unit (amd64, postgres)
  • GitHub Check: Client / Unit (386)
  • GitHub Check: Management / Unit (amd64, sqlite)
  • GitHub Check: Client / Unit (amd64)
  • GitHub Check: JS / Lint
  • GitHub Check: release_ui
  • GitHub Check: release
  • GitHub Check: Client / Unit
  • GitHub Check: Windows
  • GitHub Check: Darwin
  • GitHub Check: Client / Unit
  • GitHub Check: release_ui_darwin
  • GitHub Check: Linux
  • GitHub Check: iOS / Build
  • GitHub Check: FreeBSD Port / Build & Test
  • GitHub Check: Client / Unit
  • GitHub Check: Android / Build
🔇 Additional comments (10)
client/embed/embed.go (5)

163-213: LGTM!

The Start() method changes are well-structured:

  • Proper cancellable context with cleanup on failure via deferred cancel.
  • Correct order of assignment (c.connect before c.cancel) under mutex protection.
  • Clean handling of startup timeout and error cases.

217-247: LGTM!

The Stop() method correctly:

  • Captures c.connect to a local variable before the goroutine to avoid the race condition.
  • Cancels the context before stopping the client.
  • Cleans up c.connect in both timeout and normal completion paths.

335-354: LGTM with a minor note.

The Status() method follows good concurrency patterns by copying references under the lock then releasing. The discarded error from RunHealthProbes (line 349) appears intentional for best-effort status refresh, which is reasonable.


356-369: LGTM!

Clean implementation that follows the established pattern of using getEngine() for safe engine access with proper error handling.


371-389: LGTM!

The SetLogLevel() method is well-designed:

  • Validates input before applying changes.
  • Minimizes lock scope by copying c.connect under lock then releasing.
  • Gracefully handles the case when client isn't running by still setting the global log level.
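The validate-before-apply step can be sketched as a small helper; the real client likely delegates to logrus.ParseLevel, so this dependency-free stand-in is only illustrative:

```go
package main

import (
	"fmt"
	"strings"
)

// parseLevel validates a level string before anything is applied,
// mirroring the validate-first behavior noted above.
func parseLevel(s string) (string, error) {
	switch l := strings.ToLower(s); l {
	case "panic", "fatal", "error", "warn", "info", "debug", "trace":
		return l, nil
	default:
		return "", fmt.Errorf("invalid log level: %q", s)
	}
}

func main() {
	l, err := parseLevel("DEBUG")
	fmt.Println(l, err) // debug <nil>
	_, err = parseLevel("loud")
	fmt.Println(err != nil) // true
}
```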
client/internal/peer/status.go (5)

17-17: LGTM!

Import is correctly added and used for the Latency duration conversion in ToProto().


162-162: LGTM!

The Events field addition aligns with the PR objective to include events in FullStatus and uses the correct proto type.


986-986: LGTM!

The GetEventHistory() call is thread-safe as EventQueue maintains its own internal mutex, avoiding contention with d.mux.


1188-1195: LGTM!

The ToProto() method correctly maps internal state to proto representation with proper handling of:

  • Timestamp conversions via timestamppb.New()
  • Duration conversions via durationpb.New()
  • Error-to-string conversions with nil checks
  • Relay availability derived from error state

Also applies to: 1246-1255


1257-1275: The proto NSGroupState message does not include an ID field, so the current mapping is correct and complete according to the proto contract. While the internal NSGroupState struct contains an ID field, this is intentionally omitted from the protobuf definition.




Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
client/internal/dns/upstream_general.go (1)

38-52: Verification confirms that:

  1. The timing value IS used by callers (it is passed to writeSuccessResponse at line 186 of upstream.go)
  2. ExchangeWithNetstack returns only (*dns.Msg, error) while the caller expects (*dns.Msg, time.Duration, error)
  3. The exchange function returns a hardcoded 0 for timing on the netstack path
  4. This creates an inconsistency where netstack queries report a 0 duration while fallback queries report the actual duration

This is a legitimate issue, not just an optional suggestion.


Track netstack duration for consistent metrics.

The netstack path returns a hardcoded 0 for timing while ExchangeWithFallback measures and returns actual duration. This creates inconsistent metrics since the timing value is used downstream (see writeSuccessResponse at line 186 of upstream.go).

The underlying issue is that ExchangeWithNetstack doesn't track duration—it only returns (*dns.Msg, error). Consider measuring the duration within the netstack path to ensure consistent timing metrics across both DNS resolution paths.
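One way to restore consistent timing is a small wrapper that measures the call, sketched here with a stand-in exchange signature (the real functions return *dns.Msg, not string):

```go
package main

import (
	"fmt"
	"time"
)

// exchangeFunc mirrors the shape of an exchange that returns only a reply
// and an error, like ExchangeWithNetstack as described above.
type exchangeFunc func() (string, error)

// timedExchange measures the call so the netstack path can report a real
// round-trip duration, matching what ExchangeWithFallback already does.
func timedExchange(fn exchangeFunc) (string, time.Duration, error) {
	start := time.Now()
	reply, err := fn()
	return reply, time.Since(start), err
}

func main() {
	reply, rtt, err := timedExchange(func() (string, error) {
		time.Sleep(5 * time.Millisecond) // simulate the upstream round trip
		return "reply", nil
	})
	fmt.Println(reply, rtt >= 5*time.Millisecond, err == nil) // reply true true
}
```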

♻️ Duplicate comments (1)
client/wasm/cmd/main.go (1)

248-267: Same pattern issue: ping method should propagate success/failure.

Similar to ICMP ping, createPingMethod always resolves regardless of whether performPing succeeded or failed. This makes the Promise semantics misleading.

🧹 Nitpick comments (6)
client/internal/engine.go (1)

1696-1714: Consider the implications of defaulting relay health to true on JS/WASM.

When STUN/TURN probing is skipped on JS/WASM platforms, relayHealthy remains true by default (Line 1697). This means relay connectivity issues on these platforms will not be detected, potentially masking real problems.

Consider whether a neutral/"unknown" state would be more appropriate, or whether a lighter-weight connectivity check could be performed instead of the full probe.
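The skip itself is straightforward; a sketch of the shape (the function name is hypothetical, mirroring the GOOS == js guard described above):

```go
package main

import (
	"fmt"
	"runtime"
)

// probeRelays sketches the guard described above: on js/wasm there is no
// raw UDP socket access, so STUN/TURN probing is skipped and relay health
// stays at its healthy default.
func probeRelays() (healthy, probed bool) {
	if runtime.GOOS == "js" {
		return true, false // assumed healthy, never actually probed
	}
	// ... real STUN/TURN probes would run here ...
	return true, true
}

func main() {
	healthy, probed := probeRelays()
	fmt.Println(healthy, probed)
}
```

Returning a separate `probed` flag is one way to surface the "unknown" state the comment asks about, instead of silently reporting healthy.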

client/internal/connect.go (1)

376-387: Consider extending log level control to other engine components.

The current implementation only sets the log level on the FirewallManager. Depending on the requirements, you may want to propagate the log level to other components like the RouteManager, DNS server, or ACL manager for comprehensive logging control.

client/internal/routemanager/dnsinterceptor/handler.go (1)

561-597: Well-structured helper with netstack fallback.

The queryUpstreamDNS method cleanly encapsulates the dual-path DNS resolution:

  • Uses netstack when available (nsNet != nil)
  • Falls back to regular DNS client otherwise

The error handling is comprehensive with specific timeout diagnostics.

One minor observation: The error response at line 593 could include r.Question for better client compatibility, though the current minimal response is valid per RFC.

-	if err := w.WriteMsg(&dns.Msg{MsgHdr: dns.MsgHdr{Rcode: dns.RcodeServerFailure, Id: r.Id}}); err != nil {
+	resp := &dns.Msg{}
+	resp.SetRcode(r, dns.RcodeServerFailure)
+	if err := w.WriteMsg(resp); err != nil {

This would be consistent with how writeDNSError constructs responses at lines 271-273.

client/status/status.go (1)

344-531: Consider splitting GeneralSummary to reduce complexity.

SonarCloud correctly flags this method with cognitive complexity of 63 (allowed: 20) and 172 lines (allowed: 100). While this is a refactor from existing code, consider extracting helper methods for distinct sections:

  • formatManagementState() for lines 346-357
  • formatSignalState() for lines 359-370
  • formatRelays() for lines 381-400
  • formatNameservers() for lines 408-435
  • formatSSHServer() for lines 450-483

This would improve maintainability and testability of individual formatting logic.
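Each extracted helper would take the relevant state and return its section string, making each section unit-testable in isolation. A sketch with a hypothetical stand-in state type:

```go
package main

import "fmt"

// managementState is a stand-in for the fields this section reads.
type managementState struct {
	URL       string
	Connected bool
	Error     string
}

// formatManagementState is the kind of helper the suggestion describes:
// one output section per function, each trivially testable.
func formatManagementState(m managementState) string {
	if m.Connected {
		return fmt.Sprintf("Management: Connected to %s", m.URL)
	}
	if m.Error != "" {
		return fmt.Sprintf("Management: Disconnected, reason: %s", m.Error)
	}
	return "Management: Disconnected"
}

func main() {
	fmt.Println(formatManagementState(managementState{URL: "https://api.example.com", Connected: true}))
}
```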

client/wasm/cmd/main.go (2)

154-164: Inconsistent error return pattern for missing arguments.

When validateSSHArgs returns the specific string "error: requires host and port", it's returned directly as a js.Value (line 159), but type validation errors are rejected via promise (lines 161-163). This creates inconsistent API behavior for callers.

Consider unifying to always return a rejected promise for all validation errors:

 func createSSHMethod(client *netbird.Client) js.Func {
 	return js.FuncOf(func(this js.Value, args []js.Value) any {
 		host, port, username, validationErr := validateSSHArgs(args)
 		if !validationErr.IsUndefined() {
-			if validationErr.Type() == js.TypeString && validationErr.String() == "error: requires host and port" {
-				return validationErr
-			}
 			return createPromise(func(resolve, reject js.Value) {
 				reject.Invoke(validationErr)
 			})
 		}

193-226: Ping always resolves even on failure; consider rejecting on error.

performPing logs errors to console but createPingMethod always resolves with js.Undefined() (line 264). This makes it impossible for callers to programmatically detect ping failures.

Consider returning the latency on success or rejecting on failure:

-func performPing(client *netbird.Client, hostname string) {
+func performPing(client *netbird.Client, hostname string) (time.Duration, error) {
 	ctx, cancel := context.WithTimeout(context.Background(), pingTimeout)
 	defer cancel()

 	start := time.Now()
 	conn, err := client.Dial(ctx, "ping", hostname)
 	if err != nil {
-		js.Global().Get("console").Call("log", fmt.Sprintf("Ping to %s failed: %v", hostname, err))
-		return
+		return 0, fmt.Errorf("dial failed: %w", err)
 	}
 	// ... rest of implementation
 	latency := time.Since(start)
 	js.Global().Get("console").Call("log", fmt.Sprintf("Ping to %s: %dms", hostname, latency.Milliseconds()))
+	return latency, nil
 }

Then in createPingMethod:

latency, err := performPing(client, hostname)
if err != nil {
    reject.Invoke(js.ValueOf(err.Error()))
    return
}
resolve.Invoke(js.ValueOf(latency.Milliseconds()))
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between aca0398 and 2ccd4ae.

📒 Files selected for processing (24)
  • client/cmd/debug.go (1 hunks)
  • client/cmd/status.go (1 hunks)
  • client/embed/embed.go (6 hunks)
  • client/internal/connect.go (1 hunks)
  • client/internal/dns/server.go (3 hunks)
  • client/internal/dns/server_test.go (2 hunks)
  • client/internal/dns/upstream.go (2 hunks)
  • client/internal/dns/upstream_android.go (1 hunks)
  • client/internal/dns/upstream_general.go (1 hunks)
  • client/internal/dns/upstream_ios.go (2 hunks)
  • client/internal/dns/upstream_test.go (3 hunks)
  • client/internal/dns/wgiface.go (2 hunks)
  • client/internal/dns/wgiface_windows.go (2 hunks)
  • client/internal/engine.go (1 hunks)
  • client/internal/peer/status.go (4 hunks)
  • client/internal/routemanager/dnsinterceptor/handler.go (4 hunks)
  • client/internal/routemanager/iface/iface_common.go (2 hunks)
  • client/server/debug.go (1 hunks)
  • client/server/event.go (0 hunks)
  • client/server/server.go (1 hunks)
  • client/status/status.go (6 hunks)
  • client/status/status_test.go (4 hunks)
  • client/ui/debug.go (3 hunks)
  • client/wasm/cmd/main.go (6 hunks)
💤 Files with no reviewable changes (1)
  • client/server/event.go
🧰 Additional context used
🧬 Code graph analysis (11)
client/cmd/debug.go (1)
client/status/status.go (1)
  • ConvertToStatusOutputOverview (118-167)
client/internal/dns/wgiface.go (1)
client/internal/stdnet/stdnet.go (1)
  • Net (32-45)
client/internal/engine.go (1)
client/internal/relay/relay.go (1)
  • ProbeResult (30-34)
client/internal/peer/status.go (1)
client/proto/daemon.pb.go (21)
  • SystemEvent (3658-3669)
  • SystemEvent (3682-3682)
  • SystemEvent (3697-3699)
  • FullStatus (1994-2008)
  • FullStatus (2021-2021)
  • FullStatus (2036-2038)
  • ManagementState (1682-1689)
  • ManagementState (1702-1702)
  • ManagementState (1717-1719)
  • SignalState (1621-1628)
  • SignalState (1641-1641)
  • SignalState (1656-1658)
  • LocalPeerState (1528-1539)
  • LocalPeerState (1552-1552)
  • LocalPeerState (1567-1569)
  • PeerState (1347-1369)
  • PeerState (1382-1382)
  • PeerState (1397-1399)
  • NSGroupState (1803-1811)
  • NSGroupState (1824-1824)
  • NSGroupState (1839-1841)
client/internal/dns/upstream_ios.go (4)
client/internal/dns/wgiface.go (1)
  • WGIface (15-23)
client/internal/dns/wgiface_windows.go (1)
  • WGIface (11-19)
client/iface/iface.go (1)
  • WGIface (69-77)
client/iface/wgaddr/address.go (1)
  • Address (9-12)
client/embed/embed.go (5)
client/internal/peer/status.go (2)
  • Status (187-222)
  • NewRecorder (225-236)
client/internal/state.go (1)
  • CtxInitState (24-28)
client/internal/connect.go (1)
  • NewConnectClient (51-63)
client/internal/engine.go (1)
  • Engine (138-211)
shared/management/proto/management.pb.go (3)
  • SyncResponse (388-405)
  • SyncResponse (420-420)
  • SyncResponse (435-437)
client/internal/routemanager/dnsinterceptor/handler.go (5)
client/internal/dns/wgiface.go (1)
  • WGIface (15-23)
client/internal/dns/upstream.go (1)
  • ExchangeWithNetstack (420-434)
client/internal/dns/upstream_general.go (1)
  • GetClientPrivate (54-59)
client/internal/dns/upstream_ios.go (1)
  • GetClientPrivate (82-117)
client/iface/wgaddr/address.go (1)
  • Address (9-12)
client/status/status.go (3)
client/internal/peer/status.go (2)
  • ManagementState (129-133)
  • SignalState (122-126)
client/proto/daemon.pb.go (9)
  • ManagementState (1682-1689)
  • ManagementState (1702-1702)
  • ManagementState (1717-1719)
  • SignalState (1621-1628)
  • SignalState (1641-1641)
  • SignalState (1656-1658)
  • SSHServerState (1941-1947)
  • SSHServerState (1960-1960)
  • SSHServerState (1975-1977)
version/version.go (1)
  • NetbirdVersion (18-20)
client/internal/dns/server_test.go (1)
client/internal/stdnet/stdnet.go (1)
  • Net (32-45)
client/internal/dns/upstream_android.go (3)
client/internal/dns/wgiface.go (1)
  • WGIface (15-23)
client/internal/dns/wgiface_windows.go (1)
  • WGIface (11-19)
client/iface/iface.go (1)
  • WGIface (69-77)
client/wasm/cmd/main.go (3)
client/status/status.go (2)
  • OutputOverview (96-116)
  • ConvertToStatusOutputOverview (118-167)
client/proto/daemon.pb.go (6)
  • StatusResponse (911-920)
  • StatusResponse (933-933)
  • StatusResponse (948-950)
  • FullStatus (1994-2008)
  • FullStatus (2021-2021)
  • FullStatus (2036-2038)
version/version.go (1)
  • NetbirdVersion (18-20)
🪛 GitHub Check: SonarCloud Code Analysis
client/status/status.go

[failure] 345-345: Refactor this method to reduce its Cognitive Complexity from 63 to the 20 allowed.

See more on https://sonarcloud.io/project/issues?id=netbirdio_netbird&issues=AZrExZQgSO_oWG_qbIZc&open=AZrExZQgSO_oWG_qbIZc&pullRequest=4648


[warning] 345-345: This function has 172 lines of code, which is greater than the 100 authorized. Split it into smaller functions.

See more on https://sonarcloud.io/project/issues?id=netbirdio_netbird&issues=AZrExZQgSO_oWG_qbIZd&open=AZrExZQgSO_oWG_qbIZd&pullRequest=4648

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (21)
  • GitHub Check: Client / Unit (amd64)
  • GitHub Check: Management / Unit (amd64, postgres)
  • GitHub Check: Management / Unit (amd64, sqlite)
  • GitHub Check: Relay / Unit (amd64, -race)
  • GitHub Check: Relay / Unit (386)
  • GitHub Check: Client (Docker) / Unit
  • GitHub Check: Management / Unit (amd64, mysql)
  • GitHub Check: Signal / Unit (386)
  • GitHub Check: Client / Unit (386)
  • GitHub Check: release_ui
  • GitHub Check: release_ui_darwin
  • GitHub Check: release
  • GitHub Check: Linux
  • GitHub Check: Darwin
  • GitHub Check: Windows
  • GitHub Check: Client / Unit
  • GitHub Check: Client / Unit
  • GitHub Check: iOS / Build
  • GitHub Check: Client / Unit
  • GitHub Check: Android / Build
  • GitHub Check: JS / Lint
🔇 Additional comments (44)
client/internal/peer/status.go (3)

162-162: LGTM!

The Events field appropriately extends FullStatus to include system event history.


986-986: LGTM!

Event history is correctly populated in the full status.


1187-1279: LGTM!

The ToProto() method correctly converts FullStatus to proto.FullStatus with proper handling of:

  • Error conversions (nil-safe)
  • Duration conversion using durationpb
  • All nested state structures
  • Event history attachment
client/internal/routemanager/iface/iface_common.go (1)

7-8: LGTM!

The GetNet() method addition properly extends the wgIfaceBase interface to expose netstack access, aligning with the PR's netstack-based DNS integration.

Also applies to: 23-23

client/internal/dns/wgiface_windows.go (1)

4-5: LGTM!

The GetNet() method addition to the Windows-specific WGIface interface maintains consistency with the non-Windows version.

Also applies to: 17-17

client/internal/dns/server_test.go (1)

18-18: LGTM!

The mock GetNet() implementation appropriately returns nil for testing scenarios where netstack functionality is not exercised.

Also applies to: 85-87

client/ui/debug.go (1)

444-444: LGTM!

The refactoring from package function calls to receiver methods improves the API design and is consistently applied across all usage sites.

Also applies to: 461-461, 598-598

client/internal/dns/wgiface.go (1)

8-9: LGTM!

The GetNet() method addition to the non-Windows WGIface interface is consistent with the Windows version and properly supports the netstack-based DNS functionality.

Also applies to: 22-22

client/cmd/debug.go (1)

316-317: LGTM!

The refactor from nbstatus.ParseToFullDetailSummary(nbstatus.ConvertToStatusOutputOverview(...)) to the two-step approach with a method call aligns with the PR's stated goal of refactoring status methods to receiver methods on OutputOverview.

client/internal/dns/upstream_test.go (2)

119-130: LGTM!

The mockNetstackProvider correctly implements the WGIface interface with appropriate zero/nil return values for testing purposes. Returning nil from GetNet() ensures tests use the fallback DNS client path rather than netstack, which is appropriate for unit tests that don't require netstack integration.


65-65: LGTM!

The test correctly passes the mock provider to newUpstreamResolver, aligning with the updated function signature that now accepts a WGIface interface instead of decomposed fields.

client/server/debug.go (1)

176-178: LGTM!

The simplified delegation to s.connectClient.SetLogLevel(level) is cleaner than directly interacting with engine and firewall manager. The nil check on s.connectClient ensures safe operation when the client isn't initialized.

client/internal/dns/server.go (3)

610-616: LGTM!

The change to pass s.wgInterface directly to newUpstreamResolver enables netstack-based DNS resolution by allowing the resolver to access GetNet(). This is consistent with the PR's goal of making upstream DNS use netstack for JS/WASM platforms.


717-726: LGTM!

Consistent with the other call sites, passing s.wgInterface enables the unified interface-based approach for upstream resolver creation.


898-908: LGTM!

The addHostRootZone method correctly adopts the same pattern of passing the whole wgInterface to newUpstreamResolver, maintaining consistency across all call sites.

client/server/server.go (1)

1079-1084: LGTM!

The refactor from toProtoFullStatus(fullStatus) helper function to fullStatus.ToProto() method is a cleaner object-oriented pattern. The FullStatus type now owns its protobuf conversion logic, improving cohesion.

client/internal/routemanager/dnsinterceptor/handler.go (2)

25-25: LGTM!

The field type change from a local interface to iface.WGIface aligns with the unified interface approach across the DNS subsystem, enabling consistent access to netstack via GetNet().

Also applies to: 49-49


255-260: LGTM!

Extracting the DNS query logic into queryUpstreamDNS improves readability and encapsulates the netstack/fallback decision in one place. The nil check on return correctly handles the error path.

client/internal/dns/upstream.go (3)

22-22: LGTM!

The netstack import is correctly added to support the new netstack-based DNS exchange functionality.


418-434: LGTM!

The ExchangeWithNetstack function correctly implements the DNS exchange via netstack with proper UDP→TCP fallback for truncated responses, mirroring the existing ExchangeWithFallback pattern.


436-465: LGTM!

The netstackExchange helper is well-implemented:

  • Connection is properly closed via defer with error logging
  • Context deadline is correctly applied to the connection
  • Uses dns.Conn wrapper for proper DNS message framing
  • Error messages include the network type for easier debugging
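The UDP-first, TCP-on-truncation flow is easy to see at the wire level: in the DNS header, the TC flag is bit `0x02` of the third byte. A minimal sketch of the decision (the `exchange` helper and its function parameters are illustrative stand-ins for the real netstack dials, not the PR's API):

```go
package main

import "fmt"

// truncated reports whether a raw DNS response has the TC (truncation) bit
// set. In the wire format, TC is bit 0x02 of the third header byte.
func truncated(resp []byte) bool {
	return len(resp) >= 3 && resp[2]&0x02 != 0
}

// exchange sketches the UDP-first, TCP-on-truncation pattern. The two
// callbacks stand in for real UDP and TCP exchanges over netstack.
func exchange(udp, tcp func() []byte) []byte {
	resp := udp()
	if truncated(resp) {
		return tcp() // retry the full answer over TCP
	}
	return resp
}

func main() {
	// A minimal 12-byte header with TC set forces the TCP path.
	hdr := make([]byte, 12)
	hdr[2] = 0x02
	resp := exchange(
		func() []byte { return hdr },
		func() []byte { return []byte("full-tcp-answer") },
	)
	fmt.Println(string(resp))
}
```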
client/embed/embed.go (5)

166-175: LGTM!

The lifecycle management is well-designed. The defer ensures proper cleanup by calling cancel() only when startup fails (when c.connect remains nil), preventing context leaks.


225-228: LGTM!

Proper cleanup sequence: cancel the context before stopping the connect client, and nil out the cancel function to prevent double-cancellation.


334-353: Verify error behavior when health probes fail.

The implementation correctly copies values under lock to avoid holding the mutex during engine operations. However, errors from RunHealthProbes are silently discarded. This appears intentional for best-effort status, but verify this aligns with expected behavior.


355-368: LGTM!

Follows the established pattern of using getEngine() for safe engine access with proper error propagation.


370-388: LGTM!

The log level is set globally first, then propagated to the connect client if running. The pattern of copying connect under lock before using it is consistent with other methods.
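The copy-under-lock pattern mentioned in several of these comments can be sketched as follows (the `client` and `connector` types are hypothetical simplifications, not the embed package's real types): hold the mutex only long enough to copy the field, then operate on the copy.

```go
package main

import (
	"fmt"
	"sync"
)

type connector struct{ level string }

func (c *connector) SetLogLevel(l string) { c.level = l }

type client struct {
	mu      sync.Mutex
	connect *connector
}

func (c *client) SetLogLevel(level string) {
	c.mu.Lock()
	connect := c.connect // copy under lock
	c.mu.Unlock()

	// Safe to use without holding the lock; nil means "not running".
	if connect != nil {
		connect.SetLogLevel(level)
	}
}

func main() {
	c := &client{connect: &connector{}}
	c.SetLogLevel("debug")
	fmt.Println(c.connect.level)
}
```

The key point is that the mutex never protects the downstream call itself, only the read of the pointer, so a slow engine operation cannot block other methods.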

client/internal/dns/upstream_android.go (1)

24-30: LGTM!

The signature change aligns with the broader refactor to use WGIface consistently across platforms. The underscore correctly indicates this parameter is intentionally unused on Android, where netstack-based DNS isn't needed.

client/internal/dns/upstream_ios.go (1)

27-45: LGTM!

Clean refactor that properly derives values from the WGIface interface:

  • lIP from wgIface.Address().IP
  • lNet from wgIface.Address().Network
  • interfaceName from wgIface.Name()

This aligns with the broader interface abstraction pattern across platforms.

client/cmd/status.go (1)

104-113: LGTM!

Clean refactor from package-level functions to methods on outputInformationHolder. The switch statement correctly handles:

  • FullDetailSummary() for detailed output
  • JSON() and YAML() with error returns
  • GeneralSummary(false, false, false, false) as the default

Error handling at line 115 properly catches errors from JSON/YAML methods.
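The dispatch shape described above can be sketched like this (the `overview` type, its method bodies, and `render` are hypothetical stand-ins that mirror the refactored method names, not the real `OutputOverview`):

```go
package main

import "fmt"

type overview struct{}

func (overview) FullDetailSummary() string { return "detail" }
func (overview) JSON() (string, error)     { return `{}`, nil }
func (overview) YAML() (string, error)     { return "---", nil }
func (overview) GeneralSummary(urls, relays, nameservers, ssh bool) string {
	return "summary"
}

// render mirrors the switch in status.go: detail and the default summary
// cannot fail, while JSON/YAML propagate marshal errors to the caller.
func render(o overview, format string) (string, error) {
	switch format {
	case "detail":
		return o.FullDetailSummary(), nil
	case "json":
		return o.JSON()
	case "yaml":
		return o.YAML()
	default:
		return o.GeneralSummary(false, false, false, false), nil
	}
}

func main() {
	out, _ := render(overview{}, "json")
	fmt.Println(out)
}
```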

client/internal/dns/upstream_general.go (2)

17-20: LGTM: Netstack integration for upstream resolver.

The addition of nsNet field to enable netstack-based DNS resolution is clean. The field is correctly typed and will be nil when netstack is unavailable.


22-36: Constructor signature change is well-designed.

Passing WGIface instead of individual parameters simplifies the API and enables access to netstack via GetNet(). This aligns with the iOS upstream resolver pattern mentioned in the summary.

client/status/status.go (3)

326-333: LGTM: Clean JSON serialization method.

Simple and effective wrapper around json.Marshal. Error handling is appropriate.


335-342: LGTM: Clean YAML serialization method.

Consistent with the JSON method pattern.


533-549: LGTM: FullDetailSummary correctly composes output.

Clean delegation to GeneralSummary with all detail flags enabled, plus peer and event parsing.

client/wasm/cmd/main.go (6)

31-35: Good: Magic numbers replaced with named constants.

This addresses the previous review feedback. The constants icmpEchoRequest, icmpCodeEcho, and pingBufferSize are now clearly defined.


125-151: LGTM: Centralized SSH argument validation.

Good extraction of validation logic into a dedicated function. The validation is thorough and handles the optional username parameter correctly.


345-359: LGTM: Status overview construction is correct.

Properly converts embedded client status to proto format and uses the new ConvertToStatusOutputOverview API.


361-380: LGTM: Status JSON method returns parsed JS object.

Good approach to parse the JSON string back into a JS object for easier consumption in JavaScript.


416-441: LGTM: Sync response method with proper protojson options.

Using EmitUnpopulated: true ensures consistent output even for zero values, which is helpful for debugging.


443-466: LGTM: Log level method with proper validation.

Clean implementation with appropriate input validation and error handling.

client/status/status_test.go (4)

270-271: LGTM: Test updated for new JSON method API.

Correctly uses the new overview.JSON() method.


406-407: LGTM: Test updated for new YAML method API.

Correctly uses the new overview.YAML() method.


514-514: LGTM: Test updated for new FullDetailSummary method API.

Correctly uses the new overview.FullDetailSummary() method.


577-578: LGTM: Test updated for new GeneralSummary method API.

Correctly uses the new overview.GeneralSummary() method with all boolean parameters.

Contributor

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (5)
client/status/status.go (2)

328-335: Consider preserving the original error context.

The error wrapping discards the original marshal error, which could be useful for debugging. Also, line 334 returns err, which is guaranteed to be nil at that point.


♻️ Suggested improvement
 func (o *OutputOverview) JSON() (string, error) {
 	jsonBytes, err := json.Marshal(o)
 	if err != nil {
-		return "", fmt.Errorf("json marshal failed")
+		return "", fmt.Errorf("json marshal failed: %w", err)
 	}
-	return string(jsonBytes), err
+	return string(jsonBytes), nil
 }

337-344: Consider preserving the original error context.

Same as above: wrapping the original error would help with debugging.

♻️ Suggested improvement
 func (o *OutputOverview) YAML() (string, error) {
 	yamlBytes, err := yaml.Marshal(o)
 	if err != nil {
-		return "", fmt.Errorf("yaml marshal failed")
+		return "", fmt.Errorf("yaml marshal failed: %w", err)
 	}
 	return string(yamlBytes), nil
 }
client/internal/engine.go (1)

1752-1770: LGTM with minor suggestion for consistent logging.

The conditional skip for STUN/TURN probing on JS/WASM is correctly implemented, and relayHealthy is properly initialized to true before the check, ensuring correct behavior on both platforms.

Consider adding a debug log for JS platforms to maintain consistent observability:

💡 Optional improvement for logging consistency
 	// Skip STUN/TURN probing for JS/WASM as it's not available
 	relayHealthy := true
 	if runtime.GOOS != "js" {
 		var results []relay.ProbeResult
 		if waitForResult {
 			results = e.probeStunTurn.ProbeAllWaitResult(e.ctx, stuns, turns)
 		} else {
 			results = e.probeStunTurn.ProbeAll(e.ctx, stuns, turns)
 		}
 		e.statusRecorder.UpdateRelayStates(results)

 		for _, res := range results {
 			if res.Err != nil {
 				relayHealthy = false
 				break
 			}
 		}
 		log.Debugf("relay health check: healthy=%t", relayHealthy)
+	} else {
+		log.Debugf("relay health check: skipped on JS/WASM, assuming healthy")
 	}
client/internal/routemanager/dnsinterceptor/handler.go (1)

26-26: Redundant import alias.

The import alias iface matches the package name, making it unnecessary. Consider removing the alias for clarity.

-	iface "github.com/netbirdio/netbird/client/internal/routemanager/iface"
+	"github.com/netbirdio/netbird/client/internal/routemanager/iface"
client/internal/dns/upstream.go (1)

440-455: Consider aligning UDP buffer size handling between netstack and standard DNS paths.

The netstack path uses dns.Conn which doesn't set a UDP buffer size, whereas ExchangeWithFallback explicitly sets client.UDPSize to MTU-68 before exchange. This means netstackExchange defaults to the 512-byte DNS limit while the standard path advertises a much larger buffer. Both paths correctly handle truncation via TCP fallback, but the inconsistency could result in different truncation thresholds. To match the behavior of ExchangeWithFallback, consider adding EDNS0 options to the outgoing message in ExchangeWithNetstack to advertise the same buffer size.

📜 Review details

Configuration used: defaults

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between feb3a94 and 6a5582d.

📒 Files selected for processing (11)
  • client/cmd/debug.go
  • client/cmd/status.go
  • client/embed/embed.go
  • client/internal/connect.go
  • client/internal/dns/server.go
  • client/internal/dns/server_test.go
  • client/internal/dns/upstream.go
  • client/internal/engine.go
  • client/internal/routemanager/dnsinterceptor/handler.go
  • client/server/server.go
  • client/status/status.go
🚧 Files skipped from review as they are similar to previous changes (3)
  • client/internal/connect.go
  • client/cmd/debug.go
  • client/internal/dns/server.go
🧰 Additional context used
🧬 Code graph analysis (4)
client/internal/routemanager/dnsinterceptor/handler.go (4)
client/internal/dns/wgiface.go (1)
  • WGIface (15-23)
client/internal/dns/upstream.go (2)
  • ExchangeWithNetstack (424-438)
  • ExchangeWithFallback (377-420)
client/internal/dns/upstream_general.go (1)
  • GetClientPrivate (55-60)
client/iface/wgaddr/address.go (1)
  • Address (9-12)
client/status/status.go (4)
client/proto/daemon.pb.go (9)
  • ManagementState (1820-1827)
  • ManagementState (1840-1840)
  • ManagementState (1855-1857)
  • SignalState (1759-1766)
  • SignalState (1779-1779)
  • SignalState (1794-1796)
  • SSHServerState (2087-2093)
  • SSHServerState (2106-2106)
  • SSHServerState (2121-2123)
client/internal/peer/status.go (2)
  • ManagementState (129-133)
  • SignalState (122-126)
management/server/permissions/modules/module.go (3)
  • Networks (6-6)
  • Peers (7-7)
  • Events (13-13)
version/version.go (1)
  • NetbirdVersion (18-20)
client/internal/engine.go (1)
client/internal/relay/relay.go (1)
  • ProbeResult (30-34)
client/internal/dns/server_test.go (1)
client/internal/stdnet/stdnet.go (1)
  • Net (32-45)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (18)
  • GitHub Check: Relay / Unit (amd64, -race)
  • GitHub Check: Management / Unit (amd64, mysql)
  • GitHub Check: Management / Unit (amd64, postgres)
  • GitHub Check: Client / Unit (amd64)
  • GitHub Check: Relay / Unit (386)
  • GitHub Check: Client (Docker) / Unit
  • GitHub Check: Client / Unit (386)
  • GitHub Check: Management / Unit (amd64, sqlite)
  • GitHub Check: Android / Build
  • GitHub Check: release_ui
  • GitHub Check: Client / Unit
  • GitHub Check: Client / Unit
  • GitHub Check: Client / Unit
  • GitHub Check: Windows
  • GitHub Check: Linux
  • GitHub Check: release
  • GitHub Check: JS / Lint
  • GitHub Check: release_ui_darwin
🔇 Additional comments (19)
client/server/server.go (1)

1066-1071: LGTM!

The refactoring to use fullStatus.ToProto() method instead of the standalone helper function is cleaner and more idiomatic. The subsequent augmentation of the proto message with events and SSH server state remains correctly in place.

client/cmd/status.go (1)

104-117: LGTM!

The switch from standalone functions to method calls on OutputOverview is clean and consistent. Error handling for JSON/YAML paths is correctly preserved.

client/status/status.go (2)

346-536: LGTM!

The conversion from standalone function to receiver method is clean. All field references have been correctly updated from overview. to o., and the logic remains unchanged.


538-554: LGTM!

The method correctly delegates to parsePeers, parseEvents, and GeneralSummary with appropriate parameters. Using true for all four flags in GeneralSummary ensures the full detailed output includes URLs, relay details, nameserver details, and SSH session information.

client/embed/embed.go (7)

23-23: LGTM - Import added for new functionality.


42-42: LGTM - Recorder field for status tracking.


163-213: LGTM - Improved Start() flow with proper state tracking.

The changes correctly:

  1. Use c.connect != nil as the started indicator (better than c.cancel)
  2. Ensure context cleanup via deferred cancel on failure
  3. Store the cancel function for later use in Stop()

The flow handles both success and failure paths correctly.
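The cancel-on-failure idiom described in point 2 can be sketched as follows (the `client` type and `start` signature are hypothetical simplifications of the embed client, not its real API): the deferred function cancels only when the "started" marker was never set, so a successful start hands the cancel func to a later Stop instead of leaking the context.

```go
package main

import (
	"context"
	"errors"
	"fmt"
)

type client struct {
	connect func()             // non-nil marks "started"
	cancel  context.CancelFunc // kept for Stop() on success
}

func (c *client) start(fail bool) error {
	ctx, cancel := context.WithCancel(context.Background())
	defer func() {
		if c.connect == nil {
			cancel() // startup failed: release the context
		}
	}()

	if fail {
		return errors.New("startup failed")
	}
	c.connect = func() { <-ctx.Done() }
	c.cancel = cancel
	return nil
}

func main() {
	c := &client{}
	fmt.Println(c.start(false) == nil, c.cancel != nil)
}
```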


217-246: LGTM - Proper cleanup in Stop().

The Stop method now correctly:

  1. Cancels the context before stopping the connect client
  2. Sets c.connect = nil on both paths (context timeout and normal completion)

This ensures consistent state cleanup.


334-353: LGTM - Status method with health probes.

The method correctly captures recorder and connect under lock before releasing it, preventing race conditions. Using waitForResult: false for health probes returns quickly with cached results, which is appropriate for a status query.


355-368: LGTM - Sync response retrieval delegates to engine.

The method properly retrieves the engine via the thread-safe getEngine() helper and delegates to the engine's method which handles cloning.


370-388: LGTM - Log level propagation.

The method correctly:

  1. Validates the level string first (fail fast)
  2. Sets the global logrus level
  3. Propagates to the ConnectClient if available

The lock is properly scoped to capture connect without holding it during the level setting.

client/internal/routemanager/dnsinterceptor/handler.go (3)

566-602: LGTM - Clean abstraction for upstream DNS querying.

The queryUpstreamDNS helper nicely encapsulates the dual-path logic (netstack vs private client) with proper error handling and timeout detection. The inline error response creation at line 598 is valid for SERVFAIL responses.
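On the SERVFAIL response: with `github.com/miekg/dns` this is usually `m.SetRcode(req, dns.RcodeServerFailure)`. In the raw wire format, RCODE is the low four bits of the fourth header byte and SERVFAIL is code 2. A stdlib-only sketch of the minimal reply header (the `servfail` helper is illustrative, not the PR's code):

```go
package main

import "fmt"

// servfail builds a minimal 12-byte reply header for a query: echo the
// 16-bit ID, set QR (response) in byte 2, and put RCODE=2 (SERVFAIL) in
// the low four bits of byte 3.
func servfail(query []byte) []byte {
	resp := make([]byte, 12)
	copy(resp[:2], query[:2]) // echo the query ID
	resp[2] = 0x80            // QR = response
	resp[3] = 0x02            // RCODE = SERVFAIL
	return resp
}

func main() {
	q := []byte{0xAB, 0xCD, 0x01, 0x00, 0, 1, 0, 0, 0, 0, 0, 0}
	r := servfail(q)
	fmt.Printf("%x %d\n", r[:2], r[3]&0x0F)
}
```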


256-259: Clean refactor of upstream query logic.

The delegation to queryUpstreamDNS simplifies ServeDNS while maintaining correct behavior: the helper handles error responses internally and returns nil to signal completion.


50-50: Type change aligns with netstack integration.

Using iface.WGIface provides access to the GetNet() method required for the netstack-based DNS path.

client/internal/dns/server_test.go (2)

18-18: Import added for interface compliance.

The netstack import is required for the GetNet() return type in the mock.


85-87: Mock returns nil for non-netstack test scenarios.

Returning nil from GetNet() correctly simulates environments without netstack, causing the DNS code to fall back to the private client path. This is appropriate for the existing tests which don't exercise netstack functionality.

client/internal/dns/upstream.go (3)

21-21: Netstack import added for new DNS exchange path.

Required for the new ExchangeWithNetstack function that uses netstack's dial capabilities.


422-438: Well-structured netstack DNS exchange with truncation handling.

The function correctly implements UDP-first with TCP fallback on truncation, mirroring the behavior of ExchangeWithFallback. The simpler return signature (no duration) is appropriate since the callers don't need timing information from this path.


440-469: Solid implementation of netstack-based DNS exchange.

The helper properly manages the connection lifecycle with deferred close, propagates context deadlines, and uses the standard dns.Conn wrapper which handles protocol differences (TCP length-prefixing) automatically. Error messages appropriately include the network type for debugging.


