[management] move network map logic into new design #4774
pascal-fischer merged 25 commits into main
Conversation
Walkthrough

Introduces a network-map Controller and PeersUpdateManager (update channel), moves gRPC server and conversion logic into a shared grpc package (nbgrpc), updates BuildManager/NewServer/NewTimeBasedAuthSecretsManager signatures to accept the Controller/update manager, extends Sync-related returns with an int64 DNS-forwarder port, and rewires caches, request buffering, and tests to use the new controller/update-channel architecture.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant Peer as Peer Client
    participant GRPC as nbgrpc.Server
    participant AM as AccountManager
    participant NMC as network_map.Controller
    participant PUM as PeersUpdateManager
    participant Store as Data Store
    rect rgb(232,245,255)
        Note over Peer,GRPC: Login & Sync flow (new wiring)
        Peer->>GRPC: Login / Sync request
        GRPC->>AM: SyncAndMarkPeer(...)
        AM->>NMC: GetValidatedPeerWithMap(...)
        NMC->>Store: Read account & peers
        Store-->>NMC: Account data
        NMC-->>AM: peer, networkMap, checks, dnsFwdPort
        AM-->>GRPC: Sync response (includes dnsFwdPort)
        GRPC-->>Peer: Encrypted SyncResponse
    end
    rect rgb(240,255,244)
        Note over NMC,PUM: Update propagation
        NMC->>PUM: SendUpdate(peerID, UpdateMessage)
        PUM-->>Peer: stream UpdateMessage
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes

Points to focus review on:
Actionable comments posted: 6
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
management/internals/shared/grpc/server.go (1)
161-165: Fix timestamp to use Unix seconds. Line 162 calls `now.Second()`, which only yields a 0–59 second-of-minute value. The resulting protobuf timestamp therefore points to 1970-01-01 00:00:SS instead of 24 hours ahead, so any client treating it as an epoch expiry will see an already-expired key. Use `now.Unix()` to encode epoch seconds. Apply this diff to correct the timestamp:

```diff
- now := time.Now().Add(24 * time.Hour)
- secs := int64(now.Second())
+ now := time.Now().Add(24 * time.Hour)
+ secs := now.Unix()
```
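To see why the review flags this, the standalone snippet below (not the server's code) prints what each expression actually encodes:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	now := time.Now().Add(24 * time.Hour)

	// Bug: Second() is the second-of-minute (0–59), not an epoch value.
	wrong := int64(now.Second())

	// Fix: Unix() is seconds since 1970-01-01, i.e. a real expiry timestamp.
	right := now.Unix()

	fmt.Println(time.Unix(wrong, 0).UTC()) // 1970-01-01 00:00:SS — already expired
	fmt.Println(time.Unix(right, 0).UTC()) // ~24h from now
}
```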
🧹 Nitpick comments (7)
management/internals/shared/grpc/server_test.go (1)
15-106: LGTM! Well-structured test coverage for device authorization flow. The test cases comprehensively cover nil flows, invalid providers, and valid configurations with proper encryption/decryption validation.

Minor: Line 88 uses `context.TODO()` — consider `context.Background()` for test contexts, as it's more conventional and semantically clearer.

```diff
- context.TODO(),
+ context.Background(),
```

management/server/http/testing/testing_tools/channel/channel.go (1)
70-73: Stop leaking AccountRequestBuffer goroutines
`NewAccountRequestBuffer` spins up a goroutine that watches the provided context. Using `context.Background()` means the goroutine outlives the test and can keep touching the store after `cleanup` runs. Please wrap the background context in `context.WithCancel`, register the cancel with `t.Cleanup`, and propagate the pattern to the other helpers introduced in this PR so the buffer shuts down when the test finishes. Apply this diff (and mirror it in the other helpers):

```diff
- ctx := context.Background()
- requestBuffer := server.NewAccountRequestBuffer(ctx, store)
+ ctx, cancel := context.WithCancel(context.Background())
+ t.Cleanup(cancel)
+ requestBuffer := server.NewAccountRequestBuffer(ctx, store)
```
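For reference, a minimal sketch of the suggested lifetime pattern as a test helper — the helper name and callback shape are illustrative assumptions, not the actual helpers in `testing_tools/channel`:

```go
package channel_test

import (
	"context"
	"testing"
)

// newBufferForTest shows the lifetime pattern; in the real code the callback
// would be server.NewAccountRequestBuffer(ctx, store).
func newBufferForTest(t *testing.T, start func(ctx context.Context)) {
	t.Helper()

	// Tie the buffer's watcher goroutine to the test's lifetime.
	ctx, cancel := context.WithCancel(context.Background())
	t.Cleanup(cancel) // fires when the test (or subtest) finishes

	start(ctx)
}
```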
181-186: Align the fallback error string with the method name. The new `SyncAndMarkPeer` fallback now returns the extra `int64`, but the error still mentions `MarkPeerConnected`, which can be misleading when the mock is exercised. Please update the message to reference `SyncAndMarkPeer` so test failures clearly point at the right hook.

```diff
- return nil, nil, nil, 0, status.Errorf(codes.Unimplemented, "method MarkPeerConnected is not implemented")
+ return nil, nil, nil, 0, status.Errorf(codes.Unimplemented, "method SyncAndMarkPeer is not implemented")
```

management/server/management_proto_test.go (1)
364-380: Reuse a single validator instance across controller and manager. We now spin up both the controller and the account manager with separate `MockIntegratedValidator{}` values. That makes it easy for their behaviors to diverge if a test later customizes `ValidatePeerFunc` on the manager's copy (the controller would still hold the original no-op instance). Please allocate one mock (or concrete validator) and hand the same value into both `NewController` and `BuildManager` so the stack under test stays coherent.

```diff
- networkMapController := controller.NewController(ctx, store, metrics, updateManager, requestBuffer, MockIntegratedValidator{}, settingsMockManager, "netbird.selfhosted")
- accountManager, err := BuildManager(ctx, store, networkMapController, nil, "", eventStore, nil, false, MockIntegratedValidator{}, metrics, port_forwarding.NewControllerMock(), settingsMockManager, permissionsManager, false)
+ validator := MockIntegratedValidator{}
+ networkMapController := controller.NewController(ctx, store, metrics, updateManager, requestBuffer, validator, settingsMockManager, "netbird.selfhosted")
+ accountManager, err := BuildManager(ctx, store, networkMapController, nil, "", eventStore, nil, false, validator, metrics, port_forwarding.NewControllerMock(), settingsMockManager, permissionsManager, false)
```

client/cmd/testutil_test.go (1)
118-129: Keep the validator wiring closer to production. Here we already build the concrete integrated validator (`iv`) for the manager, yet the controller (and the gRPC server) receive brand-new mock instances. That means controller behavior under test may skip the extra checks `iv` performs. Please reuse the same validator instance (either share `iv`, or share a single mock) across the controller, manager, and server setup so the fixture mirrors the real wiring.

management/internals/shared/grpc/convertsion_test.go (2)
14-86: Consider enhancing test coverage. The test verifies deterministic behavior and cache population, which is good. However, consider adding:
- Assertions about the actual structure and content of the converted results to verify correctness, not just consistency
- Edge cases: empty configs, nil NameServerGroups, configs without CustomZones
- Verification that the second call with `config1` (`result3`) actually benefits from the cache
These additions would strengthen confidence in the conversion logic.
103-109: Minor: Benchmark naming could be clearer. The "WithoutCache" sub-benchmark actually creates a fresh cache for each iteration rather than operating without caching. Consider renaming to "WithFreshCache" or "WithoutCacheReuse" to more accurately reflect what's being measured.
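As a rough illustration of the suggested naming, a self-contained benchmark sketch with stand-in types — the real benchmark would use the package's `DNSConfigCache` and conversion function instead of these placeholders:

```go
package grpc_test

import (
	"strconv"
	"testing"
)

// Stand-ins so the sketch compiles on its own.
type fakeCache struct{ m map[string]string }

// convert simulates a conversion that fills and consults a cache.
func convert(n int, c *fakeCache) int {
	hits := 0
	for i := 0; i < n; i++ {
		k := strconv.Itoa(i)
		if _, ok := c.m[k]; ok {
			hits++
			continue
		}
		c.m[k] = k // simulate the conversion + cache fill
	}
	return hits
}

// The renamed sub-benchmarks make explicit that one path rebuilds the cache
// each iteration while the other reuses it.
func BenchmarkConversionSketch(b *testing.B) {
	b.Run("WithFreshCache", func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			convert(100, &fakeCache{m: map[string]string{}})
		}
	})
	b.Run("WithReusedCache", func(b *testing.B) {
		c := &fakeCache{m: map[string]string{}}
		b.ResetTimer()
		for i := 0; i < b.N; i++ {
			convert(100, c)
		}
	})
}
```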
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (62)
- client/cmd/testutil_test.go (2 hunks)
- client/internal/engine_test.go (2 hunks)
- client/server/server_test.go (2 hunks)
- go.mod (1 hunks)
- management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1 hunks)
- management/internals/controllers/network_map/controller/controller.go (1 hunks)
- management/internals/controllers/network_map/controller/controller_test.go (1 hunks)
- management/internals/controllers/network_map/controller/metrics.go (1 hunks)
- management/internals/controllers/network_map/controller/repository.go (1 hunks)
- management/internals/controllers/network_map/interface.go (1 hunks)
- management/internals/controllers/network_map/interface_mock.go (1 hunks)
- management/internals/controllers/network_map/network_map.go (1 hunks)
- management/internals/controllers/network_map/update_channel.go (1 hunks)
- management/internals/controllers/network_map/update_channel/updatechannel.go (6 hunks)
- management/internals/controllers/network_map/update_channel/updatechannel_test.go (3 hunks)
- management/internals/controllers/network_map/update_message.go (1 hunks)
- management/internals/server/boot.go (3 hunks)
- management/internals/server/controllers.go (4 hunks)
- management/internals/server/modules.go (1 hunks)
- management/internals/shared/grpc/conversion.go (1 hunks)
- management/internals/shared/grpc/convertsion_test.go (1 hunks)
- management/internals/shared/grpc/loginfilter.go (1 hunks)
- management/internals/shared/grpc/loginfilter_test.go (1 hunks)
- management/internals/shared/grpc/server.go (26 hunks)
- management/internals/shared/grpc/server_test.go (1 hunks)
- management/internals/shared/grpc/token_mgr.go (6 hunks)
- management/internals/shared/grpc/token_mgr_test.go (6 hunks)
- management/server/account.go (11 hunks)
- management/server/account/manager.go (2 hunks)
- management/server/account/request_buffer.go (1 hunks)
- management/server/account_test.go (54 hunks)
- management/server/dns.go (0 hunks)
- management/server/dns_test.go (4 hunks)
- management/server/event_test.go (1 hunks)
- management/server/group.go (1 hunks)
- management/server/group_test.go (9 hunks)
- management/server/holder.go (0 hunks)
- management/server/http/handler.go (3 hunks)
- management/server/http/handlers/peers/peers_handler.go (7 hunks)
- management/server/http/handlers/peers/peers_handler_test.go (7 hunks)
- management/server/http/testing/testing_tools/channel/channel.go (6 hunks)
- management/server/management_proto_test.go (3 hunks)
- management/server/management_test.go (3 hunks)
- management/server/mock_server/account_mock.go (4 hunks)
- management/server/nameserver.go (0 hunks)
- management/server/nameserver_test.go (4 hunks)
- management/server/networkmap.go (0 hunks)
- management/server/networks/manager.go (0 hunks)
- management/server/networks/resources/manager.go (0 hunks)
- management/server/networks/routers/manager.go (0 hunks)
- management/server/peer.go (13 hunks)
- management/server/peer_test.go (30 hunks)
- management/server/policy.go (0 hunks)
- management/server/policy_test.go (2 hunks)
- management/server/posture_checks.go (0 hunks)
- management/server/posture_checks_test.go (5 hunks)
- management/server/route.go (0 hunks)
- management/server/route_test.go (10 hunks)
- management/server/setupkey_test.go (6 hunks)
- management/server/user.go (5 hunks)
- management/server/user_test.go (6 hunks)
- shared/management/client/client_test.go (2 hunks)
💤 Files with no reviewable changes (10)
- management/server/networks/manager.go
- management/server/networks/resources/manager.go
- management/server/holder.go
- management/server/nameserver.go
- management/server/networkmap.go
- management/server/policy.go
- management/server/dns.go
- management/server/posture_checks.go
- management/server/route.go
- management/server/networks/routers/manager.go
🧰 Additional context used
🧬 Code graph analysis (37)
management/server/http/handler.go (3)
- management/internals/controllers/network_map/controller/controller.go (1): `Controller` (36-57)
- management/internals/controllers/network_map/interface.go (1): `Controller` (23-39)
- management/server/http/handlers/peers/peers_handler.go (1): `AddEndpoints` (31-37)

management/internals/controllers/network_map/controller/controller_test.go (2)
- management/internals/controllers/network_map/interface.go (2): `OldForwarderPort` (19-19), `DnsForwarderPort` (18-18)
- management/server/mock_server/account_mock.go (1): `MockAccountManager` (29-132)

management/internals/shared/grpc/server_test.go (3)
- management/internals/server/config/config.go (1): `Provider` (17-17)
- management/internals/shared/grpc/server.go (1): `Server` (55-78)
- encryption/message.go (2): `EncryptMessage` (10-24), `DecryptMessage` (27-40)

management/internals/controllers/network_map/update_channel.go (1)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)

management/internals/shared/grpc/token_mgr.go (3)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `PeersUpdateManager` (16-23)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)

management/server/dns_test.go (4)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/account.go (1): `BuildManager` (172-256)

management/server/route_test.go (7)
- management/server/account.go (2): `DefaultAccountManager` (64-105), `BuildManager` (172-256)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)
- management/internals/controllers/network_map/update_channel/updatechannel.go (2): `PeersUpdateManager` (16-23), `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/integrated_validator.go (6): `MockIntegratedValidator` (125-128, 153-155, 157-159, 161-163, 165-167, 169-171)
- management/server/integrations/port_forwarding/controller.go (1): `NewControllerMock` (20-22)

management/internals/shared/grpc/convertsion_test.go (2)
- management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1): `DNSConfigCache` (10-12)
- management/internals/controllers/network_map/interface.go (1): `DnsForwarderPort` (18-18)

management/internals/server/controllers.go (7)
- management/internals/server/server.go (1): `BaseServer` (45-68)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)
- management/internals/controllers/network_map/update_channel/updatechannel.go (2): `PeersUpdateManager` (16-23), `NewPeersUpdateManager` (26-32)
- management/internals/server/container.go (1): `Create` (6-10)
- management/internals/shared/grpc/token_mgr.go (3): `SecretsManager` (27-32), `TimeBasedAuthSecretsManager` (35-46), `NewTimeBasedAuthSecretsManager` (50-85)
- management/internals/controllers/network_map/controller/controller.go (2): `Controller` (36-57), `NewController` (67-99)
- management/server/account_request_buffer.go (2): `AccountRequestBuffer` (27-33), `NewAccountRequestBuffer` (35-57)

management/internals/server/boot.go (3)
- management/server/http/handler.go (1): `NewAPIHandler` (54-138)
- management/internals/shared/grpc/server.go (1): `NewServer` (81-142)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)

management/internals/controllers/network_map/controller/repository.go (1)
- management/server/store/store.go (1): `LockingStrengthNone` (47-47)

management/server/http/handlers/peers/peers_handler.go (2)
- management/internals/controllers/network_map/controller/controller.go (1): `Controller` (36-57)
- management/internals/controllers/network_map/interface.go (1): `Controller` (23-39)

management/internals/controllers/network_map/update_channel/updatechannel.go (2)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)

management/server/user.go (2)
- management/server/peer/peer.go (1): `Peer` (16-58)
- management/server/store/store.go (1): `LockingStrengthNone` (47-47)

management/internals/controllers/network_map/controller/metrics.go (1)
- management/server/telemetry/updatechannel_metrics.go (1): `UpdateChannelMetrics` (12-26)

management/server/account/manager.go (3)
- management/server/types/peer.go (2): `PeerLogin` (21-37), `PeerSync` (10-18)
- shared/management/proto/management.pb.go (6): `NetworkMap` (1738-1768, 1783-1783, 1798-1800), `PeerSystemMeta` (917-939, 954-954, 969-971)
- management/server/types/network.go (1): `NetworkMap` (32-41)

management/server/mock_server/account_mock.go (4)
- management/server/peer/peer.go (2): `PeerSystemMeta` (115-133), `Peer` (16-58)
- management/server/types/network.go (1): `NetworkMap` (32-41)
- management/server/posture/checks.go (2): `Checks` (37-52, 117-119)
- management/server/types/peer.go (1): `PeerSync` (10-18)

management/internals/server/modules.go (1)
- management/server/account.go (1): `BuildManager` (172-256)

management/internals/shared/grpc/token_mgr_test.go (2)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)

client/server/server_test.go (6)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/account.go (1): `BuildManager` (172-256)
- management/internals/shared/grpc/token_mgr.go (1): `NewTimeBasedAuthSecretsManager` (50-85)
- management/internals/shared/grpc/server.go (1): `NewServer` (81-142)

management/server/http/handlers/peers/peers_handler_test.go (2)
- management/server/http/handlers/peers/peers_handler.go (1): `Handler` (26-29)
- management/internals/controllers/network_map/interface_mock.go (1): `NewMockController` (35-39)

management/server/account.go (3)
- management/internals/controllers/network_map/controller/controller.go (1): `Controller` (36-57)
- management/internals/controllers/network_map/interface.go (1): `Controller` (23-39)
- management/server/account/manager.go (1): `Manager` (26-127)

management/internals/controllers/network_map/interface.go (1)
- dns/dns.go (2): `ForwarderServerPort` (25-25), `ForwarderClientPort` (23-23)

management/internals/shared/grpc/conversion.go (7)
- management/server/types/settings.go (1): `ExtraSettings` (82-99)
- management/internals/server/config/config.go (4): `TURNConfig` (78-83), `DTLS` (22-22), `HTTP` (24-24), `HTTPS` (25-25)
- management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1): `DNSConfigCache` (10-12)
- route/route.go (3): `ID` (48-48), `NetID` (54-54), `NetworkType` (60-60)
- shared/management/proto/management.pb.go (14): `HostConfig_UDP` (179-179), `HostConfig_DTLS` (183-183), `HostConfig_HTTP` (181-181), `HostConfig_HTTPS` (182-182), `HostConfig_TCP` (180-180), `RuleDirection_OUT` (88-88), `RuleDirection_IN` (87-87), `RuleAction_DROP` (134-134), `RuleAction_ACCEPT` (133-133), `RuleProtocol_ALL` (30-30), `RuleProtocol_TCP` (31-31), `RuleProtocol_UDP` (32-32), `RuleProtocol_ICMP` (33-33), `RuleProtocol_UNKNOWN` (29-29)
- management/server/types/firewall_rule.go (1): `FirewallRuleDirectionOUT` (18-18)
- management/server/types/policy.go (5): `PolicyTrafficActionDrop` (14-14), `PolicyRuleProtocolALL` (19-19), `PolicyRuleProtocolTCP` (21-21), `PolicyRuleProtocolUDP` (23-23), `PolicyRuleProtocolICMP` (25-25)

management/server/http/testing/testing_tools/channel/channel.go (6)
- management/server/http/testing/testing_tools/tools.go (1): `TB` (64-70)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/http/handler.go (1): `NewAPIHandler` (54-138)

management/server/management_proto_test.go (11)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/integrated_validator.go (6): `MockIntegratedValidator` (125-128, 153-155, 157-159, 161-163, 165-167, 169-171)
- management/server/account.go (1): `BuildManager` (172-256)
- management/server/integrations/port_forwarding/controller.go (1): `NewControllerMock` (20-22)
- management/internals/shared/grpc/token_mgr.go (1): `NewTimeBasedAuthSecretsManager` (50-85)
- management/server/peers/ephemeral/manager/ephemeral.go (1): `NewEphemeralManager` (52-60)
- management/internals/shared/grpc/server.go (1): `NewServer` (81-142)
- management/server/peer/peer.go (3): `PeerSystemMeta` (115-133), `Environment` (86-89), `Flags` (99-112)
- shared/management/proto/management.pb.go (9): `PeerSystemMeta` (917-939, 954-954, 969-971), `Environment` (673-682, 697-697, 712-714), `Flags` (797-812, 827-827, 842-844)

management/internals/controllers/network_map/update_channel/updatechannel_test.go (1)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)

client/cmd/testutil_test.go (6)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/account.go (1): `BuildManager` (172-256)
- management/internals/shared/grpc/token_mgr.go (1): `NewTimeBasedAuthSecretsManager` (50-85)
- management/internals/shared/grpc/server.go (1): `NewServer` (81-142)

management/server/account_test.go (7)
- management/internals/controllers/network_map/interface.go (1): `EnvNewNetworkMapBuilder` (15-15)
- management/server/account.go (2): `DefaultAccountManager` (64-105), `BuildManager` (172-256)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)
- management/internals/controllers/network_map/update_channel/updatechannel.go (2): `PeersUpdateManager` (16-23), `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)

client/internal/engine_test.go (5)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/account.go (1): `BuildManager` (172-256)
- management/internals/shared/grpc/token_mgr.go (1): `NewTimeBasedAuthSecretsManager` (50-85)

management/server/peer_test.go (12)
- management/internals/controllers/network_map/interface.go (1): `EnvNewNetworkMapBuilder` (15-15)
- management/server/http/testing/testing_tools/tools.go (1): `TB` (64-70)
- management/server/account.go (2): `DefaultAccountManager` (64-105), `BuildManager` (172-256)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)
- management/internals/controllers/network_map/update_channel/updatechannel.go (2): `PeersUpdateManager` (16-23), `NewPeersUpdateManager` (26-32)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)
- management/internals/shared/grpc/token_mgr.go (1): `Token` (48-48)
- management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1): `DNSConfigCache` (10-12)
- management/internals/shared/grpc/conversion.go (1): `ToSyncResponse` (96-139)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/integrated_validator.go (6): `MockIntegratedValidator` (125-128, 153-155, 157-159, 161-163, 165-167, 169-171)

management/internals/controllers/network_map/interface_mock.go (1)
- management/internals/controllers/network_map/interface.go (1): `Controller` (23-39)

management/server/management_test.go (6)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/account.go (1): `BuildManager` (172-256)
- management/internals/shared/grpc/token_mgr.go (1): `NewTimeBasedAuthSecretsManager` (50-85)
- management/internals/shared/grpc/server.go (1): `NewServer` (81-142)

management/server/nameserver_test.go (4)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/integrated_validator.go (6): `MockIntegratedValidator` (125-128, 153-155, 157-159, 161-163, 165-167, 169-171)

shared/management/client/client_test.go (5)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `NewPeersUpdateManager` (26-32)
- management/server/account_request_buffer.go (1): `NewAccountRequestBuffer` (35-57)
- management/internals/controllers/network_map/controller/controller.go (1): `NewController` (67-99)
- management/server/account.go (1): `BuildManager` (172-256)
- management/internals/shared/grpc/token_mgr.go (1): `NewTimeBasedAuthSecretsManager` (50-85)

management/server/peer.go (3)
- management/server/account.go (1): `DefaultAccountManager` (64-105)
- management/server/types/peer.go (1): `PeerSync` (10-18)
- management/server/peer/peer.go (1): `Peer` (16-58)

management/internals/controllers/network_map/controller/controller.go (10)
- management/internals/controllers/network_map/controller/repository.go (1): `Repository` (11-15)
- management/server/telemetry/accountmanager_metrics.go (1): `AccountManagerMetrics` (11-17)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)
- management/server/account/request_buffer.go (1): `RequestBuffer` (9-11)
- management/server/integrations/integrated_validator/interface.go (1): `IntegratedValidator` (12-23)
- management/server/types/holder.go (2): `Holder` (8-11), `NewHolder` (13-17)
- management/internals/controllers/network_map/interface.go (5): `EnvNewNetworkMapBuilder` (15-15), `EnvNewNetworkMapAccounts` (16-16), `DnsForwarderPortMinVersion` (20-20), `OldForwarderPort` (19-19), `DnsForwarderPort` (18-18)
- management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1): `DNSConfigCache` (10-12)
- management/internals/shared/grpc/conversion.go (1): `ToSyncResponse` (96-139)
- management/server/types/networkmapbuilder.go (1): `NetworkMapCache` (31-57)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (25)
- GitHub Check: Management / Benchmark (amd64, postgres)
- GitHub Check: Management / Integration (amd64, postgres)
- GitHub Check: Management / Unit (amd64, postgres)
- GitHub Check: Management / Benchmark (API) (amd64, sqlite)
- GitHub Check: Management / Benchmark (API) (amd64, postgres)
- GitHub Check: Management / Unit (amd64, mysql)
- GitHub Check: Management / Integration (amd64, sqlite)
- GitHub Check: Management / Unit (amd64, sqlite)
- GitHub Check: Relay / Unit (386)
- GitHub Check: Management / Benchmark (amd64, sqlite)
- GitHub Check: Relay / Unit (amd64, -race)
- GitHub Check: Client / Unit (386)
- GitHub Check: Client / Unit (amd64)
- GitHub Check: Client (Docker) / Unit
- GitHub Check: Android / Build
- GitHub Check: iOS / Build
- GitHub Check: Client / Unit
- GitHub Check: release
- GitHub Check: release_ui_darwin
- GitHub Check: Darwin
- GitHub Check: Linux
- GitHub Check: Windows
- GitHub Check: Client / Unit
- GitHub Check: JS / Lint
- GitHub Check: Client / Unit
🔇 Additional comments (22)
management/server/policy_test.go (2)
1167-1170: LGTM! Clean refactoring to use the new updateManager. The channel lifecycle is properly managed with the cleanup function, and the separation of concerns between `manager` and `updateManager` improves the architecture.

1138-1138: setupNetworkMapTest signature verified and refactoring is correct. The `setupNetworkMapTest` function in management/server/account_test.go returns the expected six values: (*DefaultAccountManager, *update_channel.PeersUpdateManager, *types.Account, *nbpeer.Peer, *nbpeer.Peer, *nbpeer.Peer). The unpacking on line 1138 of policy_test.go is consistent with this signature, and the subsequent usage of `updateManager` on lines 1167 and 1169 properly reflects the refactored architecture where the peer update manager is now a separate component.

management/server/event_test.go (1)

31-31: No action needed — updateManager is unused in this test. The test `TestDefaultAccountManager_GetEvents` only uses the `manager` value and its `eventStore` for cleanup. It never calls `updateManager.CreateChannel()` or any other updateManager methods, so no cleanup is required for updateManager. The blank identifier usage is appropriate here.

Likely an incorrect or invalid review comment.
management/server/posture_checks_test.go (5)
126-126: Clean integration with the new updateManager. The test correctly captures and uses the `updateManager` returned from `setupNetworkMapTest`.

150-153: Proper channel lifecycle management. Channel creation and cleanup are correctly handled using `updateManager.CreateChannel` and `updateManager.CloseChannel` with appropriate cleanup registration.

362-365: Proper channel lifecycle management for peer2. Channel creation and cleanup are correctly handled for the second peer using `updateManager` methods with proper cleanup registration.

448-448: Appropriately ignoring unused updateManager. Since this test validates the `arePostureCheckChangesAffectPeers` helper logic without setting up update channels, ignoring the `updateManager` is correct. However, ensure the resource cleanup concern raised at line 24 is addressed.

24-24: The `PeersUpdateManager` allocates no persistent resources that require explicit cleanup. The constructor only initializes an empty map and mutex — no goroutines are started. Channels are created lazily per peer and closed only when individual peers are closed via `CloseChannel()`. Since `TestDefaultAccountManager_PostureCheck` doesn't register any peers with the update manager, ignoring it poses no resource leak risk.

Likely an incorrect or invalid review comment.
management/internals/shared/grpc/loginfilter_test.go (1)
127-133: No action required. The go.mod already specifies `go 1.23.0`, which fully supports the `for i := range limit` syntax. The code is compatible with the project's current Go toolchain level.
69-69: LGTM! Controller-based wiring aligns with the new architecture. The updated BuildManager invocation correctly passes the NetworkMapController and removes the dnsDomain parameter, consistent with the architectural shift to centralize network map logic in a dedicated controller.
management/internals/controllers/network_map/update_message.go (1)
7-9: LGTM! Clean data carrier type. The UpdateMessage struct provides a straightforward wrapper for SyncResponse messages used in the new update channel flow.
management/server/account/request_buffer.go (1)
9-11: LGTM! Clean interface for backpressure-aware account retrieval. The RequestBuffer interface provides a focused abstraction that enables the network map controller to retrieve accounts with backpressure handling.
management/server/group.go (1)
367-367: LGTM! DNS domain resolution correctly delegated to the controller. The change to use `networkMapController.GetDNSDomain(settings)` properly routes DNS domain resolution through the new controller-based architecture.

management/internals/shared/grpc/token_mgr_test.go (3)
16-17: LGTM! Imports updated for new controller architecture. The addition of network_map and update_channel imports correctly supports the refactored update manager and message types.

36-36: LGTM! Update manager initialization uses new constructor. Correctly updated to use `update_channel.NewPeersUpdateManager(nil)` from the new package structure.

121-121: LGTM! Message type updated to use network_map package. The slice type `[]*network_map.UpdateMessage` correctly reflects the new update message structure from the refactored architecture.

management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1)
26-31: LGTM! Cache setter with appropriate nil safety. The SetNameServerGroup method correctly handles nil cache receivers and safely stores values in the sync.Map.
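For illustration, the nil-receiver pattern this comment praises looks roughly like the standalone sketch below (not the package's exact code; the value type is simplified to `any`):

```go
package main

import (
	"fmt"
	"sync"
)

// DNSConfigCache sketch: methods on a possibly-nil receiver simply no-op,
// so a nil cache cleanly means "caching disabled" instead of a panic.
type DNSConfigCache struct {
	NameServerGroups sync.Map
}

func (c *DNSConfigCache) SetNameServerGroup(key string, value any) {
	if c == nil { // nil receiver: caching turned off
		return
	}
	c.NameServerGroups.Store(key, value)
}

func main() {
	var disabled *DNSConfigCache
	disabled.SetNameServerGroup("k", "v") // safe no-op on nil

	enabled := &DNSConfigCache{}
	enabled.SetNameServerGroup("k", "v")
	v, ok := enabled.NameServerGroups.Load("k")
	fmt.Println(v, ok) // v true
}
```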
management/server/setupkey_test.go (2)
21-21: LGTM! Test helper updated for new return signature. The test correctly captures the additional updateManager return value (ignored here as it's not needed for this test case).

399-426: LGTM! Test correctly uses the new update manager flow. The test properly utilizes the updateManager returned by setupNetworkMapTest to create and close channels, aligning with the refactored update channel architecture.
management/internals/controllers/network_map/interface.go (1)
14-39: LGTM — interface and constants look clean. Thanks for centralizing the controller contract and exporting the forwarder port constants; it matches the upstream DNS switch at v0.59.0 and keeps tests from hardcoding magic numbers.
management/server/peer_test.go (1)
1089-1236: Nice verification of the DNS forwarder port propagation. The new assertions around `response.NetworkMap.DNSConfig.ForwarderPort` ensure we keep emitting the management-provided forwarder port that switches to `22054` once everyone runs ≥ v0.59.0, matching the docs.
113-150: LGTM! The test data generator is well-structured and provides appropriate synthetic data for benchmarking. The hardcoded IP addresses and domains are acceptable for test purposes.
```
go.opentelemetry.io/otel/exporters/prometheus v0.48.0
go.opentelemetry.io/otel/metric v1.35.0
go.opentelemetry.io/otel/sdk/metric v1.35.0
go.uber.org/mock v0.5.0
```
🧩 Analysis chain
Verify go.uber.org/mock v0.5.0 is current and appropriate for the refactoring.
Promoting go.uber.org/mock to a direct dependency aligns with the PR's new controller-based mocking infrastructure. However, ensure this version is maintained, compatible with your test suite, and free from known issues.
🌐 Web query:
Is go.uber.org/mock v0.5.0 the current stable version? Are there any known security or compatibility issues?
💡 Result:
Short answer: No — v0.5.0 is not the most recent release. There have been patch releases (e.g. v0.5.2 on 2025‑04‑28) and further activity; the project is actively maintained and supports the two most recent Go releases. [1][2][5]
Security/compatibility summary:
- Known security advisories/CVEs: none reported for go.uber.org/mock in public vulnerability databases (Snyk shows 0 direct vulnerabilities). [3]
- Compatibility/bug notes: there are active issues and recent fixes around Go generics/type-alias handling, mockgen generation edge-cases, and other functional bugs (open issues and recent patch releases address some of these). If you rely on code-generation for newer Go versions (generics/aliases), prefer the latest patch release (>= v0.5.2) and test mockgen output. [4][2][5]
Recommendations:
- Use the latest tagged release (not v0.5.0) — run `go list -m -versions go.uber.org/mock` or check the repo releases, and then update your go.mod to that version.
- Upgrade to the latest patch (v0.5.2 or newer) if you hit issues with type aliases/generics; run your test suite after upgrading. [2][4][5]
Sources:
[1] uber-go/mock (GitHub repo).
[2] Release notes / v0.5.2 (published 2025‑04‑28).
[3] Snyk package advisory for github.com/uber-go/mock (no direct vulnerabilities).
[4] uber-go/mock GitHub issues (open compatibility/bug reports).
[5] pkg.go.dev entry for go.uber.org/mock (module details, supported Go versions).
Update go.uber.org/mock to v0.5.2 or the latest stable version.
v0.5.0 is not the current stable version; newer releases (e.g., v0.5.2 released 2025‑04‑28) are available. Recent patch releases address functional bugs and compatibility issues with Go generics and type aliases. Given this PR's major refactoring with new mocking infrastructure, update to the latest patch release to ensure compatibility and avoid known issues.
🤖 Prompt for AI Agents
In go.mod around line 102, the dependency go.uber.org/mock is pinned to v0.5.0, which is outdated; update the module line to v0.5.2 (or the latest stable patch), then run `go get go.uber.org/mock@v0.5.2` (or `@latest`) followed by `go mod tidy` to refresh go.sum; finally run `go test ./...` and fix any minor compatibility issues that surface from the mock package upgrade.
```go
func (c *DNSConfigCache) GetNameServerGroup(key string) (*proto.NameServerGroup, bool) {
	if c == nil {
		return nil, false
	}
	if value, ok := c.NameServerGroups.Load(key); ok {
		return value.(*proto.NameServerGroup), true
	}
	return nil, false
}
```
Unchecked type assertion could panic.
Line 20 performs an unchecked type assertion `value.(*proto.NameServerGroup)`, which will panic if an incorrect type is stored in the cache. Consider using the two-value form or adding a type check.
Apply this diff to add a type check:
```diff
 func (c *DNSConfigCache) GetNameServerGroup(key string) (*proto.NameServerGroup, bool) {
 	if c == nil {
 		return nil, false
 	}
 	if value, ok := c.NameServerGroups.Load(key); ok {
-		return value.(*proto.NameServerGroup), true
+		if nsg, ok := value.(*proto.NameServerGroup); ok {
+			return nsg, true
+		}
 	}
 	return nil, false
 }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```go
func (c *DNSConfigCache) GetNameServerGroup(key string) (*proto.NameServerGroup, bool) {
	if c == nil {
		return nil, false
	}
	if value, ok := c.NameServerGroups.Load(key); ok {
		if nsg, ok := value.(*proto.NameServerGroup); ok {
			return nsg, true
		}
	}
	return nil, false
}
```
🤖 Prompt for AI Agents
In
management/internals/controllers/network_map/controller/cache/dns_config_cache.go
around lines 15 to 23, the code does an unchecked type assertion
value.(*proto.NameServerGroup) which can panic if the stored value is not the
expected type; change it to use the two-value type assertion (v, okType :=
value.(*proto.NameServerGroup)) and only return v, true when okType is true,
otherwise return nil, false (optionally log or handle the type mismatch) so the
function never panics on bad cache entries.
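As an alternative to repeating checked assertions at every call site, a small generic wrapper around `sync.Map` centralizes the type check in one place. This is an editor-added sketch, not a suggestion from the review itself:

```go
package main

import (
	"fmt"
	"sync"
)

// TypedMap stores only V values, so Load can never panic on a bad entry;
// the type check lives here instead of at every call site.
type TypedMap[V any] struct{ m sync.Map }

func (t *TypedMap[V]) Store(key string, value V) { t.m.Store(key, value) }

func (t *TypedMap[V]) Load(key string) (V, bool) {
	var zero V
	v, ok := t.m.Load(key)
	if !ok {
		return zero, false
	}
	typed, ok := v.(V) // still checked, but in exactly one place
	if !ok {
		return zero, false
	}
	return typed, true
}

func main() {
	var groups TypedMap[string]
	groups.Store("zone", "example.com")
	v, ok := groups.Load("zone")
	fmt.Println(v, ok) // example.com true
}
```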
```go
	for range peersCount {
		//nolint
		am.DeletePeer(context.Background(), empty, empty, empty)
	}
	time.Sleep(100 * time.Millisecond)

	assert.Equal(t, peersCount, int(deletedPeers.Load()), "Expected all peers to be deleted")
	assert.Equal(t, peersCount, int(updatePeersDeleted.Load()), "Expected all peers to be updated in the buffer")
	assert.GreaterOrEqual(t, uapLastRun.Load(), dpLastRun.Load(), "Expected update account peers to run after delete peer")

	totalNewRuns = int(updatePeersRuns.Load())
})

t.Run("old approach", func(t *testing.T) {
	updatePeersRuns.Store(0)
	updatePeersDeleted.Store(0)
	deletedPeers.Store(0)

	var mustore sync.Map
	bufupd := func(ctx context.Context, accountID string) {
		mu, _ := mustore.LoadOrStore(accountID, &sync.Mutex{})
		b := mu.(*sync.Mutex)

		if !b.TryLock() {
			return
		}

		go func() {
			time.Sleep(updateAccountInterval)
			b.Unlock()
			uap(ctx, accountID)
		}()
	}
	dp := func(ctx context.Context, accountID, peerID, userID string) error {
		deletedPeers.Add(1)
		dpLastRun.Store(time.Now().UnixMilli())
		time.Sleep(10 * time.Millisecond)
		bufupd(ctx, accountID)
		return nil
	}

	am := mock_server.MockAccountManager{
		UpdateAccountPeersFunc:       uap,
		BufferUpdateAccountPeersFunc: bufupd,
		DeletePeerFunc:               dp,
	}
	empty := ""
	for range peersCount {
		//nolint
		am.DeletePeer(context.Background(), empty, empty, empty)
	}
	time.Sleep(100 * time.Millisecond)
```
Fix invalid range usage to restore compilation
Using `for range peersCount` does not compile because integers aren't iterable; the tests will fail to build. Please switch to a classic counted loop here (and in the identical loops below).
```diff
- empty := ""
- for range peersCount {
- 	//nolint
- 	am.DeletePeer(context.Background(), empty, empty, empty)
- }
+ empty := ""
+ for i := 0; i < peersCount; i++ {
+ 	am.DeletePeer(context.Background(), empty, empty, empty)
+ }
```

Apply the same correction in the "old approach" section and any other repetition so the test suite compiles again.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
management/internals/controllers/network_map/controller/controller_test.go
around lines 183 to 234: the test uses an invalid "for range peersCount" which
doesn't compile because integers aren't iterable; replace each occurrence with a
classic counted loop (e.g., for i := 0; i < peersCount; i++ { ... }) in the
current "new approach" and the "old approach" sections (and any other identical
repetitions) so the test file compiles again.
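For context, Go 1.22 added ranging over integers, so on this module's declared `go 1.23.0` toolchain both forms below compile; the counted loop is only required on older toolchains. A standalone illustration:

```go
package main

import "fmt"

func main() {
	const peersCount = 3

	// Go 1.22+ supports ranging over an integer directly.
	for range peersCount {
		fmt.Println("delete peer (range over int)")
	}

	// Equivalent classic counted loop, required on Go < 1.22.
	for i := 0; i < peersCount; i++ {
		fmt.Println("delete peer (counted loop)")
	}
}
```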
```diff
@@ -0,0 +1,150 @@
+package grpc
```
Critical: Fix filename typo.
The filename contains a typo: `convertsion_test.go` should be `conversion_test.go`. This typo will impact code discoverability and should be corrected before merging.
🤖 Prompt for AI Agents
In management/internals/shared/grpc/convertsion_test.go around line 1, the
filename contains a typo ("convertsion_test.go"); rename the file to
"conversion_test.go" and update any references (CI/test scripts, build tags, or
tooling) that mention the old filename so tests are discovered and run
correctly.
Actionable comments posted: 2
♻️ Duplicate comments (1)
management/internals/controllers/network_map/controller/controller.go (1)
535-559: Critical: Fix the buffered update interval units. The issue flagged in the previous review is still present. `initialInterval` is stored as a plain integer (1 or `interval*10`), but when loaded and cast to `time.Duration` at lines 303 and 308, it's interpreted as nanoseconds rather than milliseconds. This causes the buffer to trigger almost immediately instead of throttling updates properly. Apply this diff to fix the unit handling:

```diff
-	var initialInterval int64
+	var initialInterval time.Duration
 	intervalStr := os.Getenv("NB_PEER_UPDATE_INTERVAL_MS")
 	interval, err := strconv.Atoi(intervalStr)
 	if err != nil {
-		initialInterval = 1
+		initialInterval = time.Millisecond
 		log.WithContext(ctx).Warnf("failed to parse peer update interval, using default value %dms: %v", initialInterval, err)
 	} else {
-		initialInterval = int64(interval) * 10
+		initialInterval = time.Duration(interval*10) * time.Millisecond
 		go func() {
 			startupPeriodStr := os.Getenv("NB_PEER_UPDATE_STARTUP_PERIOD_S")
 			startupPeriod, err := strconv.Atoi(startupPeriodStr)
 			if err != nil {
 				startupPeriod = 1
 				log.WithContext(ctx).Warnf("failed to parse peer update startup period, using default value %ds: %v", startupPeriod, err)
 			}
 			time.Sleep(time.Duration(startupPeriod) * time.Second)
-			c.updateAccountPeersBufferInterval.Store(int64(time.Duration(interval) * time.Millisecond))
-			log.WithContext(ctx).Infof("set peer update buffer interval to %dms", interval)
+			duration := time.Duration(interval) * time.Millisecond
+			c.updateAccountPeersBufferInterval.Store(int64(duration))
+			log.WithContext(ctx).Infof("set peer update buffer interval to %dms", duration/time.Millisecond)
 		}()
 	}
-	c.updateAccountPeersBufferInterval.Store(initialInterval)
-	log.WithContext(ctx).Infof("set peer update buffer interval to %dms", initialInterval)
+	c.updateAccountPeersBufferInterval.Store(int64(initialInterval))
+	log.WithContext(ctx).Infof("set peer update buffer interval to %dms", initialInterval/time.Millisecond)
```
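The underlying pitfall: `time.Duration` is an `int64` count of nanoseconds, so whatever is stored in the atomic must already be in nanoseconds. A standalone sketch of the correct round trip (and of the bug shape):

```go
package main

import (
	"fmt"
	"sync/atomic"
	"time"
)

func main() {
	var interval atomic.Int64

	// Correct: store the duration's nanosecond count; the round trip
	// through atomic.Int64 is lossless.
	interval.Store(int64(250 * time.Millisecond))
	fmt.Println(time.Duration(interval.Load())) // 250ms

	// The bug shape: storing a bare integer "250" and casting on load
	// yields 250ns, not 250ms.
	interval.Store(250)
	fmt.Println(time.Duration(interval.Load())) // 250ns
}
```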
🧹 Nitpick comments (1)
management/internals/controllers/network_map/controller/controller.go (1)
155-159: Remove commented-out code or track with issues. Commented-out proxy network map code appears throughout the file (lines 155-159, 201, 260, 394, 688-692, 702-705). Consider either removing it entirely or opening a tracked issue if this functionality is planned.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
- management/internals/controllers/network_map/controller/controller.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
management/internals/controllers/network_map/controller/controller.go (11)
- management/internals/controllers/network_map/controller/repository.go (1): `Repository` (11-15)
- management/server/telemetry/accountmanager_metrics.go (1): `AccountManagerMetrics` (11-17)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)
- management/internals/controllers/network_map/update_channel/updatechannel.go (1): `PeersUpdateManager` (16-23)
- management/server/account/request_buffer.go (1): `RequestBuffer` (9-11)
- management/server/integrations/integrated_validator/interface.go (1): `IntegratedValidator` (12-23)
- management/server/types/holder.go (2): `Holder` (8-11), `NewHolder` (13-17)
- management/internals/controllers/network_map/interface.go (5): `EnvNewNetworkMapBuilder` (15-15), `EnvNewNetworkMapAccounts` (16-16), `DnsForwarderPortMinVersion` (20-20), `OldForwarderPort` (19-19), `DnsForwarderPort` (18-18)
- management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1): `DNSConfigCache` (10-12)
- management/internals/shared/grpc/conversion.go (1): `ToSyncResponse` (96-139)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)
- management/server/types/networkmapbuilder.go (1): `NetworkMapCache` (31-57)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (28)
- GitHub Check: Management / Integration (amd64, sqlite)
- GitHub Check: Management / Integration (amd64, postgres)
- GitHub Check: Management / Unit (amd64, sqlite)
- GitHub Check: Management / Benchmark (API) (amd64, postgres)
- GitHub Check: Management / Benchmark (amd64, postgres)
- GitHub Check: Management / Benchmark (API) (amd64, sqlite)
- GitHub Check: Management / Unit (amd64, mysql)
- GitHub Check: Management / Unit (amd64, postgres)
- GitHub Check: Management / Benchmark (amd64, sqlite)
- GitHub Check: Signal / Unit (amd64)
- GitHub Check: Client / Unit (amd64)
- GitHub Check: Signal / Unit (386)
- GitHub Check: Relay / Unit (386)
- GitHub Check: Client / Unit (386)
- GitHub Check: Relay / Unit (amd64, -race)
- GitHub Check: Client (Docker) / Unit
- GitHub Check: Client / Unit
- GitHub Check: Client / Unit
- GitHub Check: Darwin
- GitHub Check: Linux
- GitHub Check: Windows
- GitHub Check: release_ui
- GitHub Check: release_ui_darwin
- GitHub Check: release
- GitHub Check: JS / Lint
- GitHub Check: Android / Build
- GitHub Check: iOS / Build
- GitHub Check: Client / Unit
🔇 Additional comments (6)
management/internals/controllers/network_map/controller/controller.go (6)
1-99: LGTM! Well-structured controller initialization. The constructor properly initializes the controller with all necessary dependencies, and the struct design appropriately uses thread-safe primitives (atomic.Int64, sync.Map) for concurrent access. The experimental feature flag handling via environment variables is clear and includes proper error handling.
276-312: LGTM! Effective buffering mechanism. The buffering logic correctly uses TryLock to prevent concurrent updates and schedules follow-up updates when needed. The atomic operations ensure thread-safety.
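The TryLock-based coalescing works roughly like the standalone sketch below; the names and buffer shape are illustrative, mirroring the "old approach" closure shown in the controller test rather than the controller's exact code:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
	"time"
)

// buffer keeps one mutex per account; TryLock makes a burst of concurrent
// callers coalesce into a single delayed update instead of one update each.
type buffer struct {
	locks  sync.Map // account ID -> *sync.Mutex
	update func(accountID string)
}

func (b *buffer) BufferUpdate(accountID string, interval time.Duration) {
	mu, _ := b.locks.LoadOrStore(accountID, &sync.Mutex{})
	m := mu.(*sync.Mutex)

	// An update is already pending for this account; piggyback on it.
	if !m.TryLock() {
		return
	}

	go func() {
		time.Sleep(interval) // collapse the burst into one update
		m.Unlock()
		b.update(accountID)
	}()
}

func main() {
	var runs atomic.Int32
	b := &buffer{update: func(string) { runs.Add(1) }}

	for i := 0; i < 10; i++ {
		b.BufferUpdate("acct-1", 50*time.Millisecond)
	}
	time.Sleep(150 * time.Millisecond)
	fmt.Println("update runs:", runs.Load()) // 1, not 10
}
```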
563-591: LGTM! Correct version-based port selection. The version comparison logic properly handles the migration from the old forwarder port to the new one, including special handling for development versions and backward compatibility when peers lack version information.
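As a standalone illustration of version-gated port selection — the old-port constant, the `golang.org/x/mod/semver` helper, and the handling of non-semver (dev) builds are assumptions here; the controller's own version logic may differ:

```go
package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

const (
	oldForwarderPort       = 5353    // assumed placeholder, not the real value
	dnsForwarderPort       = 22054   // per the review, the new port
	dnsForwarderMinVersion = "v0.59.0"
)

// forwarderPortFor picks the port based on the peer's reported version;
// peers without a valid semver keep the old port in this sketch.
func forwarderPortFor(peerVersion string) int64 {
	if !semver.IsValid(peerVersion) {
		return oldForwarderPort
	}
	if semver.Compare(peerVersion, dnsForwarderMinVersion) >= 0 {
		return dnsForwarderPort
	}
	return oldForwarderPort
}

func main() {
	fmt.Println(forwarderPortFor("v0.58.0")) // old port
	fmt.Println(forwarderPortFor("v0.59.1")) // 22054
	fmt.Println(forwarderPortFor("dev"))     // old port (invalid semver)
}
```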
314-512: LGTM! Helper methods are well-structured. The helper methods for peer deletion, validation, network map retrieval, and DNS domain resolution are correctly implemented with appropriate error handling and delegation to underlying services.
515-635: LGTM! Posture check logic is correct. The posture check collection and policy validation logic properly iterates through policies and groups, with appropriate error handling when posture checks or groups are not found.
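A toy, self-contained sketch of that collection logic — local stand-in types, not the project's account model:

```go
package main

import "fmt"

// Stand-ins for the account model (illustrative only).
type Policy struct {
	Enabled             bool
	SourcePostureChecks []string
	SourceGroups        []string
}

type Account struct {
	Policies []Policy
	Groups   map[string][]string // group ID -> peer IDs
}

// peerPostureCheckIDs collects the posture-check IDs of every enabled policy
// whose source groups contain the peer, deduplicated.
func peerPostureCheckIDs(a Account, peerID string) []string {
	seen := map[string]struct{}{}
	var ids []string
	for _, p := range a.Policies {
		if !p.Enabled || len(p.SourcePostureChecks) == 0 {
			continue
		}
		for _, g := range p.SourceGroups {
			for _, pid := range a.Groups[g] {
				if pid != peerID {
					continue
				}
				for _, id := range p.SourcePostureChecks {
					if _, dup := seen[id]; !dup {
						seen[id] = struct{}{}
						ids = append(ids, id)
					}
				}
			}
		}
	}
	return ids
}

func main() {
	a := Account{
		Policies: []Policy{{Enabled: true, SourcePostureChecks: []string{"os-check"}, SourceGroups: []string{"devs"}}},
		Groups:   map[string][]string{"devs": {"peer-1"}},
	}
	fmt.Println(peerPostureCheckIDs(a, "peer-1")) // [os-check]
}
```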
637-716: LGTM! Event handlers and connection management are correct. The event handlers properly delegate to cache management methods with appropriate experimental feature flag checks, and the connection management methods correctly delegate to the update manager.
Actionable comments posted: 2
♻️ Duplicate comments (1)
management/internals/controllers/network_map/controller/controller.go (1)
535-559: CRITICAL: Fix the buffer interval units (still unresolved from previous review). This issue was flagged in a previous review but remains unaddressed. The `initialInterval` variable stores plain integers (1 or interval*10), not durations. When these are cast to `time.Duration` at line 556, you get nanoseconds instead of milliseconds. For example, if `initialInterval = 1`, line 556 stores 1 nanosecond instead of 1 millisecond, causing `BufferUpdateAccountPeers` to spin continuously and hammer the CPU. Apply this diff to fix the units:

```diff
-	var initialInterval int64
+	var initialInterval time.Duration
 	intervalStr := os.Getenv("NB_PEER_UPDATE_INTERVAL_MS")
 	interval, err := strconv.Atoi(intervalStr)
 	if err != nil {
-		initialInterval = 1
+		initialInterval = time.Millisecond
 		log.WithContext(ctx).Warnf("failed to parse peer update interval, using default value %dms: %v", initialInterval, err)
 	} else {
-		initialInterval = int64(interval) * 10
+		initialInterval = time.Duration(interval*10) * time.Millisecond
 		go func() {
 			startupPeriodStr := os.Getenv("NB_PEER_UPDATE_STARTUP_PERIOD_S")
 			startupPeriod, err := strconv.Atoi(startupPeriodStr)
 			if err != nil {
 				startupPeriod = 1
 				log.WithContext(ctx).Warnf("failed to parse peer update startup period, using default value %ds: %v", startupPeriod, err)
 			}
 			time.Sleep(time.Duration(startupPeriod) * time.Second)
-			c.updateAccountPeersBufferInterval.Store(int64(time.Duration(interval) * time.Millisecond))
-			log.WithContext(ctx).Infof("set peer update buffer interval to %dms", interval)
+			duration := time.Duration(interval) * time.Millisecond
+			c.updateAccountPeersBufferInterval.Store(int64(duration))
+			log.WithContext(ctx).Infof("set peer update buffer interval to %dms", duration/time.Millisecond)
 		}()
 	}
-	c.updateAccountPeersBufferInterval.Store(int64(time.Duration(initialInterval) * time.Millisecond))
-	log.WithContext(ctx).Infof("set peer update buffer interval to %dms", initialInterval)
+	c.updateAccountPeersBufferInterval.Store(int64(initialInterval))
+	log.WithContext(ctx).Infof("set peer update buffer interval to %dms", initialInterval/time.Millisecond)
```
🧹 Nitpick comments (4)
management/internals/controllers/network_map/update_channel/updatechannel.go (1)
174-178: Consider adding metrics collection for consistency. All other public methods in this file (`SendUpdate`, `CreateChannel`, `CloseChannel`, `HasChannel`, `GetAllConnectedPeers`) collect duration metrics, but `CountStreams` does not. Adding metrics here would maintain consistency with the established instrumentation pattern. Example:

```diff
 func (p *PeersUpdateManager) CountStreams() int {
+	start := time.Now()
+
 	p.channelsMux.RLock()
-	defer p.channelsMux.RUnlock()
-	return len(p.peerChannels)
+	defer func() {
+		p.channelsMux.RUnlock()
+		if p.metrics != nil {
+			p.metrics.UpdateChannelMetrics().CountStreamsMethodDuration(time.Since(start))
+		}
+	}()
+
+	return len(p.peerChannels)
 }
```

management/internals/shared/grpc/conversion_test.go (1)
14-86: Add assertions to validate the actual structure of converted results. The test correctly verifies caching behavior (same input → same output) and differentiation (different inputs → different outputs). However, it doesn't validate the actual content of the converted protocol DNS config. Consider adding assertions to verify:

- The ServiceEnable flag is correctly converted
- CustomZones (domains, records) are correctly translated
- NameServerGroups (IDs, names, name servers, IPs, ports) are properly mapped
- The DNS forwarder port from `network_map.DnsForwarderPort` is correctly included

This would ensure the conversion produces correctly structured protocol buffers, not just consistent ones.
Example assertions to add after line 61:
```go
// Validate result1 structure
if result1.ServiceEnable != config1.ServiceEnable {
	t.Errorf("ServiceEnable mismatch: expected %v, got %v", config1.ServiceEnable, result1.ServiceEnable)
}
if len(result1.CustomZones) != len(config1.CustomZones) {
	t.Errorf("CustomZones count mismatch: expected %d, got %d", len(config1.CustomZones), len(result1.CustomZones))
}
if len(result1.NameServerGroups) != len(config1.NameServerGroups) {
	t.Errorf("NameServerGroups count mismatch: expected %d, got %d", len(config1.NameServerGroups), len(result1.NameServerGroups))
}
```
88-111: Consider adding memory allocation reporting for more comprehensive benchmarking. The benchmark design is sound, correctly testing both cached and uncached scenarios across multiple data sizes. To gain additional insight into memory efficiency, consider calling `b.ReportAllocs()` in each benchmark sub-test. Example:

```diff
 b.Run(fmt.Sprintf("WithCache-Size%d", size), func(b *testing.B) {
 	cache := &cache.DNSConfigCache{}
+	b.ReportAllocs()
 	b.ResetTimer()
 	for i := 0; i < b.N; i++ {
```

management/internals/controllers/network_map/controller/controller.go (1)
677-680: Remove unused groups variable. The `groups` map is built but never used in the function. Apply this diff:

```diff
-	groups := make(map[string][]string)
-	for groupID, group := range account.Groups {
-		groups[groupID] = group.Peers
-	}
-
 	validatedPeers, err := c.integratedPeerValidator.GetValidatedPeers(ctx, account.Id, maps.Values(account.Groups), maps.Values(account.Peers), account.Settings.Extra)
```
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (3)
- management/internals/controllers/network_map/controller/controller.go (1 hunks)
- management/internals/controllers/network_map/update_channel/updatechannel.go (6 hunks)
- management/internals/shared/grpc/conversion_test.go (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (3)
management/internals/shared/grpc/conversion_test.go (2)
- management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1): `DNSConfigCache` (10-12)
- management/internals/controllers/network_map/interface.go (1): `DnsForwarderPort` (18-18)

management/internals/controllers/network_map/update_channel/updatechannel.go (2)
- management/internals/controllers/network_map/update_message.go (1): `UpdateMessage` (7-9)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)

management/internals/controllers/network_map/controller/controller.go (8)
- management/internals/controllers/network_map/controller/repository.go (1): `Repository` (11-15)
- management/internals/controllers/network_map/update_channel.go (1): `PeersUpdateManager` (5-13)
- management/server/account/request_buffer.go (1): `RequestBuffer` (9-11)
- management/server/integrations/integrated_validator/interface.go (1): `IntegratedValidator` (12-23)
- management/server/types/holder.go (2): `Holder` (8-11), `NewHolder` (13-17)
- management/internals/controllers/network_map/interface.go (5): `EnvNewNetworkMapBuilder` (15-15), `EnvNewNetworkMapAccounts` (16-16), `DnsForwarderPortMinVersion` (20-20), `OldForwarderPort` (19-19), `DnsForwarderPort` (18-18)
- management/internals/controllers/network_map/controller/cache/dns_config_cache.go (1): `DNSConfigCache` (10-12)
- management/internals/shared/grpc/conversion.go (1): `ToSyncResponse` (96-139)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (27)
- GitHub Check: Management / Integration (amd64, postgres)
- GitHub Check: Management / Integration (amd64, sqlite)
- GitHub Check: Management / Benchmark (API) (amd64, postgres)
- GitHub Check: Management / Benchmark (API) (amd64, sqlite)
- GitHub Check: Management / Unit (amd64, mysql)
- GitHub Check: Management / Unit (amd64, sqlite)
- GitHub Check: Management / Unit (amd64, postgres)
- GitHub Check: Management / Benchmark (amd64, postgres)
- GitHub Check: Relay / Unit (amd64, -race)
- GitHub Check: Management / Benchmark (amd64, sqlite)
- GitHub Check: Relay / Unit (386)
- GitHub Check: Client / Unit (amd64)
- GitHub Check: Signal / Unit (386)
- GitHub Check: Client / Unit (386)
- GitHub Check: Client (Docker) / Unit
- GitHub Check: Client / Unit
- GitHub Check: Linux
- GitHub Check: Windows
- GitHub Check: Darwin
- GitHub Check: Android / Build
- GitHub Check: iOS / Build
- GitHub Check: Client / Unit
- GitHub Check: release_ui_darwin
- GitHub Check: release
- GitHub Check: release_ui
- GitHub Check: JS / Lint
- GitHub Check: Client / Unit
🔇 Additional comments (8)
management/internals/controllers/network_map/update_channel/updatechannel.go (6)
1-12: LGTM! Clean import of the centralized network_map package. The addition of the network_map package import correctly supports the refactor to use a centralized `UpdateMessage` type instead of a local definition.
16-23: LGTM! Struct field correctly updated to use the centralized type. The `peerChannels` map now references `network_map.UpdateMessage`, which aligns with the refactor to consolidate type definitions.
26-32: LGTM! Constructor initialization consistent with struct changes. The map initialization correctly uses the `network_map.UpdateMessage` type, matching the struct field definition.
34-60: LGTM! Method signature updated correctly with proper thread safety. The `SendUpdate` method signature now accepts `*network_map.UpdateMessage`, and the implementation maintains proper read-lock usage for thread-safe map access.
62-88: LGTM! Channel creation updated consistently with proper locking. The `CreateChannel` method now returns and creates channels typed to `*network_map.UpdateMessage`, and maintains proper write-lock usage for thread-safe map manipulation.
174-178: Critical race condition properly resolved. The `CountStreams` method now correctly acquires the read lock before accessing `peerChannels`, addressing the data race flagged in the previous review. The implementation properly uses `RLock`/`RUnlock` to guard the map read.
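To make the fixed pattern concrete, a minimal runnable sketch; the field names (`channelsMux`, `peerChannels`) are assumptions inferred from this discussion rather than verbatim source:

```go
package main

import (
	"fmt"
	"sync"
)

// PeersUpdateManager sketch: fields are invented for illustration.
type PeersUpdateManager struct {
	channelsMux  sync.RWMutex
	peerChannels map[string]chan struct{}
}

// CountStreams returns the number of open peer update channels. The read
// lock guards the map against concurrent mutation by writers (for example
// CreateChannel/CloseChannel) that take the write lock.
func (m *PeersUpdateManager) CountStreams() int {
	m.channelsMux.RLock()
	defer m.channelsMux.RUnlock()
	return len(m.peerChannels)
}

func main() {
	m := &PeersUpdateManager{peerChannels: map[string]chan struct{}{
		"peer-1": make(chan struct{}),
	}}
	fmt.Println(m.CountStreams()) // 1
}
```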
management/internals/shared/grpc/conversion_test.go (2)

1-12: LGTM! Package declaration and imports are appropriate for a DNS config conversion test file.
113-150: LGTM! The helper function appropriately generates test data for benchmarking purposes. Using consistent IPs and RData across entries is acceptable for performance testing, as it focuses on conversion logic rather than data variety.
```go
for _, peer := range account.Peers {
	if !c.peersUpdateManager.HasChannel(peer.ID) {
		log.WithContext(ctx).Tracef("peer %s doesn't have a channel, skipping network map update", peer.ID)
		continue
	}

	wg.Add(1)
	semaphore <- struct{}{}
	go func(p *nbpeer.Peer) {
		defer wg.Done()
		defer func() { <-semaphore }()

		start := time.Now()

		postureChecks, err := c.getPeerPostureChecks(account, p.ID)
		if err != nil {
			log.WithContext(ctx).Debugf("failed to get posture checks for peer %s: %v", p.ID, err)
			return
		}

		c.metrics.CountCalcPostureChecksDuration(time.Since(start))
		start = time.Now()

		var remotePeerNetworkMap *types.NetworkMap

		if c.experimentalNetworkMap(accountID) {
			remotePeerNetworkMap = c.getPeerNetworkMapExp(ctx, p.AccountID, p.ID, approvedPeersMap, customZone, c.accountManagerMetrics)
		} else {
			remotePeerNetworkMap = account.GetPeerNetworkMap(ctx, p.ID, customZone, approvedPeersMap, resourcePolicies, routers, c.accountManagerMetrics)
		}

		c.metrics.CountCalcPeerNetworkMapDuration(time.Since(start))

		// TODO: Proxy map

		peerGroups := account.GetPeerGroups(p.ID)
		start = time.Now()
		update := grpc.ToSyncResponse(ctx, nil, p, nil, nil, remotePeerNetworkMap, dnsDomain, postureChecks, dnsCache, account.Settings, extraSetting, maps.Keys(peerGroups), dnsFwdPort)
		c.metrics.CountToSyncResponseDuration(time.Since(start))

		c.peersUpdateManager.SendUpdate(ctx, p.ID, &network_map.UpdateMessage{Update: update})
	}(peer)
}
```
Prevent context cancellation from interrupting peer updates.
The parent ctx is captured by each goroutine. If the caller's context is cancelled (e.g., due to a timeout or shutdown), peer update operations may fail mid-flight, leaving some peers with stale network maps.
Consider using a detached context for the goroutines:
```diff
 for _, peer := range account.Peers {
 	if !c.peersUpdateManager.HasChannel(peer.ID) {
 		log.WithContext(ctx).Tracef("peer %s doesn't have a channel, skipping network map update", peer.ID)
 		continue
 	}
 	wg.Add(1)
 	semaphore <- struct{}{}
-	go func(p *nbpeer.Peer) {
+	go func(p *nbpeer.Peer, parentCtx context.Context) {
 		defer wg.Done()
 		defer func() { <-semaphore }()
+		ctx := context.WithoutCancel(parentCtx)
 		start := time.Now()
 		postureChecks, err := c.getPeerPostureChecks(account, p.ID)
 		// ... rest of goroutine
-	}(peer)
+	}(peer, ctx)
 }
```
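As a standalone illustration of the suggested pattern, a minimal sketch showing that `context.WithoutCancel` (Go 1.21+) keeps context values while dropping cancellation; the key type and values are invented for the example:

```go
package main

import (
	"context"
	"fmt"
)

type ctxKey string // hypothetical key type for the example

func main() {
	parent, cancel := context.WithCancel(context.Background())
	parent = context.WithValue(parent, ctxKey("requestID"), "abc-123")

	// Detach: values survive, cancellation does not propagate.
	detached := context.WithoutCancel(parent)

	cancel() // the caller's context is cancelled

	fmt.Println(parent.Err())                        // context.Canceled
	fmt.Println(detached.Err())                      // <nil> — still live
	fmt.Println(detached.Value(ctxKey("requestID"))) // abc-123
}
```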
}🤖 Prompt for AI Agents
management/internals/controllers/network_map/controller/controller.go around
lines 168-210: each goroutine currently captures the parent ctx so cancelling
the caller can abort in-flight peer updates; create and use a detached context
inside the goroutine (e.g., derive a new context from context.Background() or
context.WithCancel(context.Background())) and use that for all operations and
logging within the goroutine; if you need to preserve tracing/request metadata,
copy required values from the parent ctx into the new context before using
log.WithContext and other calls; ensure the new context is used in
getPeerPostureChecks, getPeerNetworkMap/getPeerNetworkMapExp, ToSyncResponse and
SendUpdate calls so cancellation of the original ctx does not interrupt these
per-peer updates.
```go
go func() {
	defer b.mu.Unlock()
	_ = c.UpdateAccountPeers(ctx, accountID)
	if !b.update.Load() {
		return
	}
	b.update.Store(false)
	if b.next == nil {
		b.next = time.AfterFunc(time.Duration(c.updateAccountPeersBufferInterval.Load()), func() {
			_ = c.UpdateAccountPeers(ctx, accountID)
		})
		return
	}
	b.next.Reset(time.Duration(c.updateAccountPeersBufferInterval.Load()))
}()
```
Use a detached context for buffered background updates.
The goroutine captures ctx from the caller. If that context is cancelled, both the immediate update (line 297) and the scheduled follow-up update (line 304) will fail. Since this is a background buffering mechanism, it should continue operating independently of the original request lifecycle.
Apply this diff:
```diff
 go func() {
 	defer b.mu.Unlock()
-	_ = c.UpdateAccountPeers(ctx, accountID)
+	_ = c.UpdateAccountPeers(context.Background(), accountID)
 	if !b.update.Load() {
 		return
 	}
 	b.update.Store(false)
 	if b.next == nil {
 		b.next = time.AfterFunc(time.Duration(c.updateAccountPeersBufferInterval.Load()), func() {
-			_ = c.UpdateAccountPeers(ctx, accountID)
+			_ = c.UpdateAccountPeers(context.Background(), accountID)
 		})
 		return
 	}
 	b.next.Reset(time.Duration(c.updateAccountPeersBufferInterval.Load()))
 }()
```
🤖 Prompt for AI Agents
In management/internals/controllers/network_map/controller/controller.go around
lines 295 to 309, the background goroutine and the time.AfterFunc callback
capture the request ctx; replace that with a detached context so buffered
updates run independently of the caller's lifecycle: create a new ctx (e.g.
ctxBG := context.Background() or ctxBG, _ :=
context.WithCancel(context.Background())) inside the goroutine and use ctxBG
when calling c.UpdateAccountPeers in both the immediate call and inside the
AfterFunc; also ensure the context package is imported if not already.
Actionable comments posted: 2
🧹 Nitpick comments (2)
management/server/management_test.go (1)
204-208: Correct wiring of the new controller architecture. The initialization properly sets up the update manager, request buffer, and network map controller with the correct dependencies. The flow matches the new design where the controller orchestrates network map operations.
Minor improvement: the `ctx` created on line 205 could be reused when calling `BuildManager` on line 211 instead of creating another `context.Background()`:

```diff
 ctx := context.Background()
 updateManager := update_channel.NewPeersUpdateManager(metrics)
 requestBuffer := server.NewAccountRequestBuffer(ctx, str)
 networkMapController := controller.NewController(ctx, str, metrics, updateManager, requestBuffer, server.MockIntegratedValidator{}, settingsMockManager, "netbird.selfhosted", port_forwarding.NewControllerMock())
 accountManager, err := server.BuildManager(
-	context.Background(),
+	ctx,
 	str,
 	networkMapController,
```

management/server/account_test.go (1)
1375-1377: Verify channel drain logic to prevent a potential test hang. The channel drain at lines 1375-1377 will block indefinitely if `updMsg` is empty. Consider using a non-blocking drain pattern to ensure the test doesn't hang when there are no pending messages:

```diff
-	// drain channel
-	<-updMsg
+	// drain channel
+	select {
+	case <-updMsg:
+		// drained one message
+	default:
+		// no messages to drain
+	}
```

This matches the pattern used elsewhere in the same test at lines 1431-1437, where a similar drain uses a select statement with a default case.
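If a test ever needs to flush every queued message rather than exactly one, the same select/default idea extends to a loop; a small generic sketch (the helper name is made up):

```go
package main

import "fmt"

// drainAll empties a buffered channel without blocking and reports how
// many messages were discarded.
func drainAll[T any](ch <-chan T) (n int) {
	for {
		select {
		case <-ch:
			n++
		default:
			return n
		}
	}
}

func main() {
	updMsg := make(chan string, 4)
	updMsg <- "update-1"
	updMsg <- "update-2"
	fmt.Println("drained:", drainAll(updMsg)) // drained: 2
}
```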
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (10)
- management/internals/server/controllers.go (4 hunks)
- management/server/account_test.go (54 hunks)
- management/server/dns_test.go (4 hunks)
- management/server/http/testing/testing_tools/channel/channel.go (6 hunks)
- management/server/management_proto_test.go (3 hunks)
- management/server/management_test.go (3 hunks)
- management/server/nameserver_test.go (4 hunks)
- management/server/peer_test.go (30 hunks)
- management/server/route_test.go (10 hunks)
- shared/management/client/client_test.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- management/server/route_test.go
management/server/management_test.go (3)
23-26: LGTM! Imports align with the new architecture.The new imports for
controller,update_channel, andnbgrpcare correctly added to support the refactored network map controller design.
210-227: LGTM! BuildManager correctly receives the network map controller.The updated
BuildManagercall properly passes thenetworkMapControlleras per the new signature, completing the integration with the controller-based architecture.
229-245: LGTM! Server construction follows the new nbgrpc pattern.The secrets manager and gRPC server are correctly instantiated using the
nbgrpcpackage constructors, properly wiring in theupdateManagerandnetworkMapControllerto complete the refactored architecture.shared/management/client/client_test.go (2)
21-23: LGTM! Imports align with the new controller-based architecture.The new imports correctly reference the controller, update channel, and shared gRPC packages needed for the refactored network map design.
127-128: LGTM! Server instantiation correctly wired.The secrets manager and gRPC server are properly instantiated with the new
updateManagerandnetworkMapController, aligning with the refactored architecture. Error handling is appropriately in place.management/server/peer_test.go (1)
27-32: LGTM! Test refactoring properly aligned with new architecture.The test updates consistently adopt the new controller-based architecture:
- Test setup correctly instantiates
updateManager,requestBuffer, andnetworkMapControllerBuildManagercalls updated to accept the newnetworkMapControllerparameter- Channel lifecycle properly managed with
CreateChannel/CloseChannelpairs- Type migrations to shared
grpcpackage (*grpc.Token,grpc.ToSyncResponse) are consistentsetupTestAccountManagersignature correctly updated to return theupdateManagerThe experimental feature flag usage via environment variable is appropriate for testing both code paths.
Also applies to: 178-178, 748-913, 1098-1101, 1178-1178, 1180-1180, 1290-1295, 1375-1380, 1528-1533, 1608-1613, 1746-1746, 1803-1806, 1050-1054
```go
	return Create(s, func() *nmapcontroller.Controller {
		return nmapcontroller.NewController(context.Background(), s.Store(), s.Metrics(), s.PeersUpdateManager(), s.AccountRequestBuffer(), s.IntegratedValidator(), s.SettingsManager(), s.dnsDomain, s.ProxyController())
	})
}

func (s *BaseServer) AccountRequestBuffer() *server.AccountRequestBuffer {
	return Create(s, func() *server.AccountRequestBuffer {
		return server.NewAccountRequestBuffer(context.Background(), s.Store())
	})
```
Stop leaking AccountRequestBuffer goroutine
NewAccountRequestBuffer spins up processGetAccountRequests and relies on the passed context to terminate the worker. By wiring it with context.Background(), the goroutine never receives a cancellation signal, so every server lifetime leaks a live worker that keeps touching the store even after shutdown. The same applies to the controller creation just above. Please pass the BaseServer lifecycle context (the one you cancel in BaseServer shutdown) or derive a cancellable ctx from it so these background tasks are torn down correctly.
🤖 Prompt for AI Agents
In management/internals/server/controllers.go around lines 72 to 80, the code
constructs Nmap Controller and AccountRequestBuffer using context.Background(),
which leaks long‑running goroutines because they never get cancelled; change
both calls to use the BaseServer lifecycle context (or a derived cancellable
context tied to BaseServer shutdown) instead of context.Background() so their
internal workers receive the cancel signal when the server is stopped.
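A minimal sketch of the suggested lifecycle wiring; the field names (`lifecycleCtx`, `cancel`) and the worker body are invented to show the shape, not the actual BaseServer implementation:

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// BaseServer sketch: background workers derive from a cancellable context
// that Shutdown cancels.
type BaseServer struct {
	lifecycleCtx context.Context
	cancel       context.CancelFunc
}

func NewBaseServer() *BaseServer {
	ctx, cancel := context.WithCancel(context.Background())
	return &BaseServer{lifecycleCtx: ctx, cancel: cancel}
}

// startWorker stands in for NewAccountRequestBuffer's internal goroutine.
func (s *BaseServer) startWorker(name string) {
	go func() {
		<-s.lifecycleCtx.Done() // the worker exits when the server shuts down
		fmt.Println(name, "stopped:", s.lifecycleCtx.Err())
	}()
}

// Shutdown cancels the lifecycle context, tearing down every worker.
func (s *BaseServer) Shutdown() { s.cancel() }

func main() {
	s := NewBaseServer()
	s.startWorker("request-buffer")
	s.Shutdown()
	time.Sleep(50 * time.Millisecond) // give the worker time to print
}
```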
```go
ctx := context.Background()
updateManager := update_channel.NewPeersUpdateManager(metrics)
requestBuffer := mgmt.NewAccountRequestBuffer(ctx, store)
networkMapController := controller.NewController(ctx, store, metrics, updateManager, requestBuffer, mgmt.MockIntegratedValidator{}, settingsMockManager, "netbird.selfhosted", port_forwarding.NewControllerMock())
accountManager, err := mgmt.BuildManager(context.Background(), store, networkMapController, nil, "", eventStore, nil, false, ia, metrics, port_forwarding.NewControllerMock(), settingsMockManager, permissionsManagerMock, false)
```
Consider context and mock instance consistency.
Two potential consistency issues:

1. Context inconsistency: Line 116 creates `ctx`, which is used for `NewAccountRequestBuffer` and `NewController` (lines 118-119), but line 120 uses a fresh `context.Background()` for `BuildManager`. If these components need to share cancellation or deadline semantics, consider using the same context instance.
2. Duplicate mock instances: Lines 119 and 120 both call `port_forwarding.NewControllerMock()`, creating two separate mock instances. If these should reference the same mock for consistent test behavior, consider creating it once and reusing it.
ctx := context.Background()
updateManager := update_channel.NewPeersUpdateManager(metrics)
requestBuffer := mgmt.NewAccountRequestBuffer(ctx, store)
- networkMapController := controller.NewController(ctx, store, metrics, updateManager, requestBuffer, mgmt.MockIntegratedValidator{}, settingsMockManager, "netbird.selfhosted", port_forwarding.NewControllerMock())
- accountManager, err := mgmt.BuildManager(context.Background(), store, networkMapController, nil, "", eventStore, nil, false, ia, metrics, port_forwarding.NewControllerMock(), settingsMockManager, permissionsManagerMock, false)
+ proxyControllerMock := port_forwarding.NewControllerMock()
+ networkMapController := controller.NewController(ctx, store, metrics, updateManager, requestBuffer, mgmt.MockIntegratedValidator{}, settingsMockManager, "netbird.selfhosted", proxyControllerMock)
+ accountManager, err := mgmt.BuildManager(ctx, store, networkMapController, nil, "", eventStore, nil, false, ia, metrics, proxyControllerMock, settingsMockManager, permissionsManagerMock, false)📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
| ctx := context.Background() | |
| updateManager := update_channel.NewPeersUpdateManager(metrics) | |
| requestBuffer := mgmt.NewAccountRequestBuffer(ctx, store) | |
| networkMapController := controller.NewController(ctx, store, metrics, updateManager, requestBuffer, mgmt.MockIntegratedValidator{}, settingsMockManager, "netbird.selfhosted", port_forwarding.NewControllerMock()) | |
| accountManager, err := mgmt.BuildManager(context.Background(), store, networkMapController, nil, "", eventStore, nil, false, ia, metrics, port_forwarding.NewControllerMock(), settingsMockManager, permissionsManagerMock, false) | |
| ctx := context.Background() | |
| updateManager := update_channel.NewPeersUpdateManager(metrics) | |
| requestBuffer := mgmt.NewAccountRequestBuffer(ctx, store) | |
| proxyControllerMock := port_forwarding.NewControllerMock() | |
| networkMapController := controller.NewController(ctx, store, metrics, updateManager, requestBuffer, mgmt.MockIntegratedValidator{}, settingsMockManager, "netbird.selfhosted", proxyControllerMock) | |
| accountManager, err := mgmt.BuildManager(ctx, store, networkMapController, nil, "", eventStore, nil, false, ia, metrics, proxyControllerMock, settingsMockManager, permissionsManagerMock, false) |
🤖 Prompt for AI Agents
In shared/management/client/client_test.go around lines 116 to 120, the test
creates a local ctx and passes it to NewAccountRequestBuffer and NewController
but then calls BuildManager with a fresh context.Background(), and it also
constructs port_forwarding.NewControllerMock() twice; to fix, reuse the same ctx
variable when calling BuildManager (replace the second context.Background() with
ctx) and instantiate port_forwarding.NewControllerMock() once into a local
variable and pass that same mock to both NewController and BuildManager so both
components share the same mock instance.
Actionable comments posted: 0
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
client/cmd/testutil_test.go (1)
102-102: Handle the error from NewIntegratedValidator. The error returned by `integrations.NewIntegratedValidator` is ignored. If initialization fails, passing an invalid `iv` to `BuildManager` on line 120 could cause unexpected test failures. Apply this diff:

```diff
-	iv, _ := integrations.NewIntegratedValidator(context.Background(), peersmanager, settingsManagerMock, eventStore)
+	iv, err := integrations.NewIntegratedValidator(context.Background(), peersmanager, settingsManagerMock, eventStore)
+	if err != nil {
+		t.Fatal(err)
+	}
```
🧹 Nitpick comments (3)
client/server/server_test.go (1)
306-325: Consider consolidating mocks and aligning validator usage across components. Two concerns with the test setup:

1. Duplicate settings manager mocks: Line 306 creates `settingsManagerMock` (used only in line 308), while line 313 creates `settingsMockManager` (used everywhere else). These are separate mock instances, which is redundant.
2. Inconsistent integrated validator usage:
   - Line 318: `networkMapController` receives `server.MockIntegratedValidator{}` (mock, value)
   - Line 319: `BuildManager` receives `ia` (real validator from line 308)
   - Line 325: `NewServer` receives `&server.MockIntegratedValidator{}` (mock, pointer)

This inconsistency could lead to unpredictable test behavior, as different components will have different validation logic. For a cohesive test, consider using either the real validator (`ia`) or a single mock instance consistently across all three components. Apply this diff to consolidate the mocks:
```diff
-	settingsManagerMock := settings.NewMockManager(ctrl)
+	settingsMockManager := settings.NewMockManager(ctrl)
-	ia, _ := integrations.NewIntegratedValidator(context.Background(), peersManager, settingsManagerMock, eventStore)
+	ia, _ := integrations.NewIntegratedValidator(context.Background(), peersManager, settingsMockManager, eventStore)
 	metrics, err := telemetry.NewDefaultAppMetrics(context.Background())
 	require.NoError(t, err)
-	settingsMockManager := settings.NewMockManager(ctrl)
 	groupsManager := groups.NewManagerMock()
 	requestBuffer := server.NewAccountRequestBuffer(context.Background(), store)
 	peersUpdateManager := update_channel.NewPeersUpdateManager(metrics)
-	networkMapController := controller.NewController(context.Background(), store, metrics, peersUpdateManager, requestBuffer, server.MockIntegratedValidator{}, settingsMockManager, "netbird.selfhosted", port_forwarding.NewControllerMock())
+	networkMapController := controller.NewController(context.Background(), store, metrics, peersUpdateManager, requestBuffer, ia, settingsMockManager, "netbird.selfhosted", port_forwarding.NewControllerMock())
 	accountManager, err := server.BuildManager(context.Background(), store, networkMapController, nil, "", eventStore, nil, false, ia, metrics, port_forwarding.NewControllerMock(), settingsMockManager, permissionsManagerMock, false)
 	if err != nil {
 		return nil, "", err
 	}
 	secretsManager := nbgrpc.NewTimeBasedAuthSecretsManager(peersUpdateManager, config.TURNConfig, config.Relay, settingsMockManager, groupsManager)
-	mgmtServer, err := nbgrpc.NewServer(config, accountManager, settingsMockManager, peersUpdateManager, secretsManager, nil, &manager.EphemeralManager{}, nil, &server.MockIntegratedValidator{}, networkMapController)
+	mgmtServer, err := nbgrpc.NewServer(config, accountManager, settingsMockManager, peersUpdateManager, secretsManager, nil, &manager.EphemeralManager{}, nil, ia, networkMapController)
```

client/cmd/testutil_test.go (2)
115-119: Consider test configurability and validator consistency. The new component wiring is structurally correct, but there are a couple of points to consider:

1. Hardcoded domain: The `dnsDomain` parameter is hardcoded to `"netbird.cloud"` on line 118. If tests need to verify domain-specific behavior, consider extracting this to a constant or making it configurable.
2. Inconsistent validator usage: Line 118 creates a zero-value `mgmt.MockIntegratedValidator{}`, while line 120 passes the real `iv` validator to `BuildManager`, and line 126 creates another mock. If the mock's `ValidatePeerFunc` is invoked during test execution, it will panic since it's nil. Consider whether you should reuse `iv` consistently or ensure the mock is properly initialized if validation is triggered.
120-126: Successfully integrates the new controller-based architecture. The calls to `BuildManager`, `NewTimeBasedAuthSecretsManager`, and `NewServer` correctly reflect the updated API signatures with the new `networkMapController` and `updateManager` dependencies. The test setup properly wires the new components together.

Minor optimization: Line 120 could reuse the `ctx` from line 115 instead of creating a new `context.Background()`, though this doesn't affect correctness.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
- client/cmd/testutil_test.go (2 hunks)
- client/internal/engine_test.go (2 hunks)
- client/server/server_test.go (2 hunks)
- management/server/account_test.go (54 hunks)
🔇 Additional comments (10)
client/server/server_test.go (1)
17-19: LGTM! Clean import additions for the new network map controller architecture. The imports correctly wire in the new controller, update channel manager, and shared gRPC package to support the refactored design.
client/internal/engine_test.go (2)
29-31: LGTM! New imports support the controller-based architecture. The new imports properly support the refactored network map controller design. The `nbgrpc` alias is clear and avoids conflicts.
1589-1598: LGTM! Test wiring properly reflects the new controller-based architecture. The refactoring correctly instantiates the new components in the proper order:

- Creates `updateManager` and `requestBuffer` as dependencies
- Wires them into the `networkMapController`
- Passes the controller to `BuildManager` and `NewServer`

All constructor signatures match their implementations, and error handling is appropriate.
client/cmd/testutil_test.go (1)
15-17: LGTM! The new imports correctly support the controller-based architecture refactoring. The `nbgrpc` alias appropriately avoids collision with the standard gRPC import.

management/server/account_test.go (6)
25-27: LGTM - imports aligned with the new architecture. The new imports properly support the refactored network map controller design.
2930-2970: LGTM - test helper properly updated. The `createManager` signature correctly returns the new `PeersUpdateManager`, and the implementation properly initializes all required components (updateManager, requestBuffer, controller) before passing them to `BuildManager`. All error paths are consistent with the new signature.
2998-3043: LGTM - test setup helper correctly propagates updateManager. The signature change properly exposes the `updateManager` to test cases that need to manage peer update channels.
412-412: LGTM - call sites correctly updated. Tests appropriately capture or discard the `updateManager` return value based on their needs. Error handling is preserved across all call sites.

Also applies to: 609-610, 711-714, 1170-1170, 1230-1230
3045-3053: LGTM - helper functions correctly updated. The type references to `network_map.UpdateMessage` are consistent with the refactored package structure.

Also applies to: 3055-3066
1375-1378: Investigate and document the behavioral difference in network map updates for DeletePeer. The conditional drain reveals that the new network map builder sends an additional update message during `SavePolicy` that the old implementation doesn't. This is undocumented and isolated to the DeletePeer test.
- Clarify if the extra message is intentional or a bug in the new implementation
- If intentional, document why and consider separate test expectations for each implementation
- If unintended, align the implementations to send the same number of updates
- Remove the conditional workaround once behavior is consistent or properly documented
There was a problem hiding this comment.
Actionable comments posted: 1
🧹 Nitpick comments (1)
management/server/account_test.go (1)
3100-3102: Consider cleanup of update channels in benchmarks.Update channels are created in benchmark loops without explicit cleanup. While this may be intentional to avoid affecting benchmark timing, it could lead to resource accumulation, especially if benchmarks are interrupted or if
b.Nis large.If resource cleanup is needed, consider using
b.Cleanup:for peerID := range account.Peers { - updateManager.CreateChannel(ctx, peerID) + updateManager.CreateChannel(ctx, peerID) + b.Cleanup(func() { + updateManager.CloseChannel(ctx, peerID) + }) }Note: This may affect benchmark measurements, so evaluate whether cleanup is necessary for your testing environment.
Also applies to: 3162-3164, 3231-3233
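One caveat if the module still targets Go versions below 1.22, where the loop variable is shared across iterations: closures registered with `b.Cleanup` should capture a copy. A minimal demonstration of the difference:

```go
package main

import "fmt"

func main() {
	fns := make([]func(), 0, 3)
	for _, id := range []string{"p1", "p2", "p3"} {
		id := id // copy for the closure; implicit per-iteration on Go 1.22+
		fns = append(fns, func() { fmt.Println("cleanup", id) })
	}
	for _, f := range fns {
		f() // cleanup p1, p2, p3 — without the copy, pre-1.22 prints p3 three times
	}
}
```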
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- management/internals/controllers/network_map/update_channel/updatechannel.go (5 hunks)
- management/server/account_test.go (53 hunks)
🔇 Additional comments (5)
management/server/account_test.go (2)
25-27: LGTM! New imports support the network map refactoring. The addition of network_map, controller, and update_channel imports aligns with the PR's objective to move network map logic into a new design.
2928-2968: LGTM! Test helper signatures updated correctly. The `createManager` and `setupNetworkMapTest` functions have been properly updated to support the new network map architecture:

- `createManager` now creates and returns the `PeersUpdateManager` and properly wires it through the controller and manager initialization
- `setupNetworkMapTest` now returns the `updateManager` for use in tests
- All call sites throughout the file have been updated to capture the new return values
- Error handling remains consistent
Also applies to: 2996-3041
management/internals/controllers/network_map/update_channel/updatechannel.go (3)
176-180: CountStreams race condition properly fixed. The previously flagged critical issue has been addressed correctly. The read lock now properly guards the map access during the length measurement.
37-62: Well-implemented concurrent update distribution. The synchronization patterns are correct throughout:
- Read lock for checking channel existence
- Non-blocking send prevents blocking on full/slow consumers
- Proper telemetry and logging for dropped updates
This design choice to drop updates rather than block is reasonable for network map distribution where newer updates supersede older ones.
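A minimal sketch of the non-blocking send described above; the drop counter is a stand-in for the real telemetry call:

```go
package main

import "fmt"

// sendUpdate delivers an update without blocking the caller: if the peer's
// buffered channel is full, the update is dropped and counted instead.
func sendUpdate(ch chan<- string, update string, dropped *int) {
	select {
	case ch <- update:
		// delivered
	default:
		*dropped++ // stand-in for a telemetry/metrics counter
	}
}

func main() {
	ch := make(chan string, 1) // tiny buffer to force a drop
	dropped := 0
	sendUpdate(ch, "network-map-v1", &dropped)
	sendUpdate(ch, "network-map-v2", &dropped) // buffer full: dropped
	fmt.Println("queued:", len(ch), "dropped:", dropped) // queued: 1 dropped: 1
}
```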
78-82: No issue found — readers already handle channel closure correctly. The production reader in `handleUpdates` already properly detects channel closure using the two-value receive form:

```go
case update, open := <-updates:
	if !open {
		// handles closure
	}
```

At management/internals/shared/grpc/server.go:296-301, the code receives the update with the ok flag and checks `if !open` to detect when the channel is closed. This is the correct and safe pattern. The concern raised in the original review comment is already addressed in the codebase. Readers are properly detecting and handling channel closure.
Likely an incorrect or invalid review comment.
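For completeness, a runnable sketch of the two-value receive inside a select loop, with invented channel names:

```go
package main

import (
	"context"
	"fmt"
)

func main() {
	ctx := context.Background()
	updates := make(chan int, 2)
	updates <- 1
	updates <- 2
	close(updates)

	for {
		select {
		case <-ctx.Done():
			return
		case update, open := <-updates:
			if !open {
				fmt.Println("channel closed, reader exits cleanly")
				return
			}
			fmt.Println("got update", update)
		}
	}
}
```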
```go
// We need to sleep to wait for the buffer peer update
time.Sleep(300 * time.Millisecond)
```
Consider replacing hardcoded sleep with deterministic synchronization.
Using time.Sleep to wait for buffered peer updates makes the test fragile and timing-dependent. On slower systems, 300ms may be insufficient, causing flaky failures. On faster systems, it unnecessarily slows down the test suite.
Consider one of these alternatives:
- Use a synchronization channel from the update buffer to signal when updates are flushed
- Expose a flush/drain method on the request buffer for testing
- Poll with timeout to check if the update has been processed
Example polling approach:
```diff
-	// We need to sleep to wait for the buffer peer update
-	time.Sleep(300 * time.Millisecond)
-
+	// Wait for buffer to process the peer update
+	require.Eventually(t, func() bool {
+		// Check if the update has been processed by attempting to create the channel
+		// and verifying the initial state
+		return true // replace with actual condition
+	}, 2*time.Second, 50*time.Millisecond, "timed out waiting for buffered update")
```

Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In management/server/account_test.go around lines 1360-1362, replace the brittle
time.Sleep(300 * time.Millisecond) used to wait for buffered peer updates with
deterministic synchronization: either have the buffer send a done signal on a
channel when it flushes and wait on that channel in the test, add an exported
Flush/Drain method on the request buffer and call it from the test to block
until processing is complete, or implement a short poll-with-timeout that
repeatedly checks a visible state (e.g., buffer empty flag or peer update
applied) until success or timeout; update the test to use one of these patterns
to avoid fixed sleeps and flakiness.
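As a self-contained illustration of the polling pattern, assuming the stretchr/testify dependency these tests already use; the polled condition is a dummy atomic flag:

```go
package example_test

import (
	"sync/atomic"
	"testing"
	"time"

	"github.com/stretchr/testify/require"
)

func TestEventuallyPattern(t *testing.T) {
	var processed atomic.Bool

	// Simulate a buffered background update landing after a delay.
	go func() {
		time.Sleep(20 * time.Millisecond)
		processed.Store(true)
	}()

	// Poll every 10ms, fail after 2s: robust on slow machines and fast on
	// quick ones, unlike a fixed time.Sleep.
	require.Eventually(t, processed.Load, 2*time.Second, 10*time.Millisecond,
		"timed out waiting for buffered update")
}
```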
Describe your changes
Issue ticket number and link
Stack
Checklist
Documentation
Select exactly one:
Docs PR URL (required if "docs added" is checked)
Paste the PR link from https://github.com/netbirdio/docs here:
https://github.com/netbirdio/docs/pull/__
Summary by CodeRabbit
New Features
Improvements
Tests