
CPU usage of opensnitch-ui always over 6% #801

Closed
pabloab opened this issue Jan 17, 2023 · 21 comments
Labels
3rd party Error related to a third-party lib/app/...


@pabloab

pabloab commented Jan 17, 2023

I'm not sure if it's normal, but opensnitch-ui is almost always (idle, without showing the statistics) one of the top 10 processes with the highest CPU usage:

$ pidstat 2 10 -p $(pgrep opensnitch-ui)
Linux 5.4.0-137-generic                 01/17/2023      _x86_64_        (4 CPU)
 PID    %usr %system  %guest   %wait    %CPU   CPU  Command
3587    5.00    2.00    0.00    0.00    7.00     1  opensnitch-ui
3587    5.50    1.50    0.00    0.00    7.00     1  opensnitch-ui
3587    3.00    3.50    0.00    0.00    6.50     1  opensnitch-ui
3587    3.50    1.50    0.00    0.00    5.00     2  opensnitch-ui
3587    4.00    2.00    0.00    0.00    6.00     2  opensnitch-ui
3587    4.50    2.00    0.00    0.00    6.50     0  opensnitch-ui
3587    4.00    2.50    0.00    0.00    6.50     2  opensnitch-ui
3587    5.00    1.50    0.00    0.00    6.50     0  opensnitch-ui
3587    2.00    4.00    0.00    0.00    6.00     1  opensnitch-ui
3587    4.50    2.50    0.00    0.00    7.00     1  opensnitch-ui
3587    4.10    2.30    0.00    0.00    6.40     -  opensnitch-ui

On the other hand, opensnitchd average CPU usage is around 1%, which seems more reasonable.

opensnitchd 1.5.1. Ubuntu 20.04. GNOME Shell 3.36.9, $XDG_SESSION_TYPE x11, kernel 5.4.0-137-generic.

@gustavo-iniguez-goya
Collaborator

Hi @pabloab ,

Which python3-grpcio version do you have installed? -> $ pip3 list installed | grep grpcio or $ apt show python3-grpcio

I've recently realized that there's a bug in versions > 1.44 and < 1.48 that causes exactly this behaviour: #647 (comment)

@pabloab
Author

pabloab commented Jan 17, 2023

$ apt show python3-grpcio
Package: python3-grpcio
Version: 1.16.1-1ubuntu5
...
Installed-Size: 2,620 kB
Provides: python3.8-grpcio
Depends: python3 (<< 3.9), python3 (>= 3.8~), python3-six (>= 1.5.2), python3:any, libc-ares2 (>= 1.11.0~rc1), libc6 (>= 2.29), libssl1.1 (>= 1.1.0), libstdc++6 (>= 4.1.1), zlib1g (>= 1:1.1.4)
Python-Version:
 3.8
APT-Manual-Installed: yes
APT-Sources: http://archive.ubuntu.com/ubuntu focal/universe amd64 Packages

@gustavo-iniguez-goya
Collaborator

thank you Pablo,

I can only reproduce this issue while the GUI is the window that has focus, refreshing connections.

If that is what you're referring to, then it's normal behaviour.

@pabloab
Author

pabloab commented Jan 17, 2023

No, this happens all the time, even with the GUI closed. With the GUI open, the average CPU usage (according to pidstat) jumps to 8.30%.

Please tell me if any of this info helps, or if there's anything more I can do.

$ sudo strace -p $(pgrep opensnitch-ui) -c
strace: Process 107764 attached
^Cstrace: Process 107764 detached
% time     seconds  usecs/call     calls    errors syscall
------ ----------- ----------- --------- --------- ----------------
 97.92    1.681785       18083        93           poll
  1.14    0.019611       19611         1           restart_syscall
  0.72    0.012381          30       407        46 futex
  0.12    0.002120          24        88           read
  0.08    0.001400           6       209       202 recvmsg
  0.01    0.000140          23         6           munmap
  0.00    0.000025           2        12           lseek
  0.00    0.000025           4         6           openat
  0.00    0.000023           3         6           close
  0.00    0.000021           3         6           mmap
  0.00    0.000010           1         6           fstat
------ ----------- ----------- --------- --------- ----------------
100.00    1.717541                   840       248 total
$ strace -p $(pgrep opensnitch-ui) |& tee opensnitch-ui-strace
$ sort opensnitch-ui-strace | uniq -c | sort -rn | head
     82 recvmsg(6, {msg_namelen=0}, 0)          = -1 EAGAIN (Resource temporarily unavailable)
     63 futex(0x93eb50, FUTEX_WAKE_PRIVATE, 1)  = 1
     40 read(5, "\1\0\0\0\0\0\0\0", 16)         = 8
     38 poll([{fd=5, events=POLLIN}, {fd=6, events=POLLIN}, {fd=7, events=POLLIN}, {fd=12, events=POLLIN}, {fd=15, events=POLLIN}], 5, -1) = 1 ([{fd=5, revents=POLLIN}])
     37 futex(0x93eb48, FUTEX_WAKE_PRIVATE, 1)  = 1

Maybe a non-blocking read is spinning on EAGAIN? About 60% in user mode, 40% in kernel mode.
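
For reference, the descriptors in that poll set can be identified via /proc (fd numbers 5 and 6 taken from the output above):

$ ls -l /proc/$(pgrep opensnitch-ui)/fd/5 /proc/$(pgrep opensnitch-ui)/fd/6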

@gustavo-iniguez-goya
Collaborator

It smells like a problem related to the gRPC lib.

OK, open the Python script /usr/bin/opensnitch-ui and comment this line out:

logging.getLogger().disabled = True

Then close the GUI and launch it from a terminal as follows:

~ $ GRPC_TRACE="all" GRPC_VERBOSITY="debug" opensnitch-ui

There should be a ton of messages. But if it's related to the gRPC lib, there should be repetitive messages, like it continuously writing:

I0117 18:31:28.825809941  167312 completion_queue.cc:962]    grpc_completion_queue_next(cq=0x55665c2d9480, deadline=gpr_timespec { tv_sec: 1673976689, tv_nsec: 25803849, clock_type: 1 }, reserved=(nil))
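
If the output scrolls by too fast to eyeball, a rough way to count which trace lines repeat the most, along the lines of the strace analysis above (the log file name is just an example, and the awk field assumes the format shown):

$ GRPC_TRACE="all" GRPC_VERBOSITY="debug" opensnitch-ui 2>&1 | tee opensnitch-ui-grpc.log
$ # stop it with Ctrl-C after a few seconds, then:
$ awk '{print $4}' opensnitch-ui-grpc.log | sort | uniq -c | sort -rn | head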

On the other hand, are you saving events to disk or only in memory?

Unfortunately, right now there's no way to debug these problems without adding print()'s here and there.
Anyway, I'll blame the gRPC lib for now.

Also take a look at the daemon log /var/log/opensnitchd.log, just in case it's logging any errors.

@pabloab
Author

pabloab commented Jan 17, 2023

Running it that way (and with that line commented out), the most common lines look like this:

completion_queue.cc:1082]   RETURN_EVENT[0x253f090]: QUEUE_TIMEOUT
completion_queue.cc:969]    grpc_completion_queue_next(cq=0x253f090, deadline=gpr_timespec { tv_sec: 1673977635, tv_nsec: 610295876, clock_type: 1 

On the other hand, are you saving events to disk or only in memory?

The Database is in memory. Or do you mean something else?

/var/log/opensnitchd.log doesn't seem to show much; the only ERR/WARN entries are a few of these:

WAR  Error while pinging UI service: rpc error: code = DeadlineExceeded desc = context deadline exceeded, state: READY
ERR  Connection to the UI service lost.
ERR  getting notifications: rpc error: code = Unavailable desc = transport is closing <nil>

@gustavo-iniguez-goya
Collaborator

The Database is in memory.

👍 it was only to rule out that it was writing too many events to disk, which is generally slower.

Try installing grpcio 1.44.0 only for your user: $ pip3 install --user --ignore-installed grpcio==1.44.0

It'll install the lib under your home: /home/USER/.local/lib/, so it shouldn't interfere with the rest of your system.

Close and relaunch the GUI after that and see if it makes any difference.
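
As a quick sanity check after relaunching, you can confirm which grpc build Python actually picks up (user-local vs system):

$ python3 -c 'import grpc; print(grpc.__version__, grpc.__file__)'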

@pabloab
Author

pabloab commented Jan 17, 2023

$ pip3 install --user --ignore-installed grpcio==1.44.0
$ pkill opensnitch-ui
$ opensnitch-ui &
$ pidstat 2 30 -p $(pgrep opensnitch-ui)
06:17:27 PM   UID       PID    %usr %system  %guest   %wait    %CPU   CPU  Command
Average:     1000    626642    3.63    2.52    0.00    0.07    6.15     -  opensnitch-ui

Disabling it using the context menu on the top-bar icon results in pretty much the same CPU usage.

@gustavo-iniguez-goya
Collaborator

That's likely caused by gRPC; it's what I've seen other times. What happens if you stop the daemon?
Even if you disable the interception, the daemon remains connected to the GUI (the GUI just acts as a server), but there shouldn't be any activity.

So if you launch the GUI without the daemon running, does it consume any % of the CPU? (it shouldn't).

Anyway, try debugging it with perf: perf top -g -p $(pgrep opensnitch-ui)
Maybe you'll get a better idea of what it's doing.

@pabloab
Author

pabloab commented Jan 22, 2023

Stopping the service changes things, but not much. On average (30 samples, with pidstat), CPU usage is 5.97% with the service stopped (~6.48% with the service running).

Running perf as suggested for a while (it returns similar values with or without the service running):

Samples: 17K of event 'cycles', 4000 Hz, Event count (approx.): 580852425 lost: 0/0 drop: 0/0
  Children      Self  Shared Object                            Symbol
+   50.46%    10.27%  [kernel]                                 [k] entry_SYSCALL_64_after_hwframe
+   39.49%     7.63%  [kernel]                                 [k] do_syscall_64
+   36.20%     0.78%  libc-2.31.so                             [.] getifaddrs_internal
+   33.48%     0.72%  libc-2.31.so                             [.] __res_nclose
+   25.55%     0.11%  [kernel]                                 [k] __x64_sys_epoll_wait
+   25.30%     0.19%  [kernel]                                 [k] do_epoll_wait
+   23.07%     0.56%  [kernel]                                 [k] ep_poll
+   21.45%     0.09%  [kernel]                                 [k] schedule_hrtimeout_range
+   21.25%     0.20%  [kernel]                                 [k] schedule_hrtimeout_range_clock
+   18.01%     0.53%  [kernel]                                 [k] __schedule
+   17.51%     0.22%  [kernel]                                 [k] schedule
+   14.37%    13.65%  [kernel]                                 [k] syscall_return_via_sysret
+   10.52%     9.20%  cygrpc.cpython-38-x86_64-linux-gnu.so    [.] std::_Rb_tree<long, long, std::_Identity<long>, std::less<long>, std::allocator<long> >::_M_insert_unique<long const&>

BTW, running opensnitch-ui from a terminal:

$ opensnitch-ui
Themes not available. Install qt-material if you want to change GUI's appearance: pip3 install qt-material.
Loading translations: /usr/lib/python3/dist-packages/opensnitch/i18n locale: en_US
exception loading ipasn db: No module named 'pyasn'
Install python3-pyasn to display IP's network name.
new node connected, listening for client responses... /tmp/osui.sock

@gustavo-iniguez-goya gustavo-iniguez-goya added the 3rd party Error related to a third-party lib/app/... label Jan 22, 2023
@gustavo-iniguez-goya
Collaborator

Taking all the info into account, this issue doesn't seem related to the GUI, but probably to python3-grpcio. It's strange, because grpcio version 1.16.1 has been tested a lot and I've never experienced this behaviour.

@gustavo-iniguez-goya closed this as not planned Feb 15, 2023
@bughunter2

I'm also experiencing constant CPU usage: both the opensnitchd service and the UI process sit between 4% and 6% each, even when all network interfaces are down, and even if the UI is closed.

This seems a bit wasteful if we consider laptops and battery life.

Can this be improved? Should I file a bug or feature request?

Info:

$ pip3 list installed | grep grpcio
grpcio              1.48.4

@gustavo-iniguez-goya
Collaborator

hey @bughunter2, did you take a look at the logs? Change LogLevel to DEBUG from the GUI (or LogLevel to 0 in the default-config.json), and see if there's any error or any repetitive connection/event taking place.

Also verify that opensnitch is using 'ebpf' as the proc monitor method.
By the way, what version are you using?

Anyway, possible errors aside, it'll depend a lot on the use case. For example, Transmission will cause opensnitch to use more CPU than other apps.
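
In case it helps, a quick way to check both settings and watch the log (this assumes the usual config path and key names, /etc/opensnitchd/default-config.json with LogLevel and ProcMonitorMethod; adjust for your distro):

$ sudo grep -E '"LogLevel"|"ProcMonitorMethod"' /etc/opensnitchd/default-config.json
$ sudo tail -f /var/log/opensnitchd.log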

@bughunter2

Hi @gustavo-iniguez-goya, thanks for your response! Yes, ebpf is the proc monitor method.

Also, a new discovery: the opensnitch-ui process uses between 3% and 6% constantly, even if the opensnitch daemon/service is NOT running at all. In other words, the opensnitch-ui CPU usage is similar to when opensnitchd does run. And if opensnitchd is running, it also starts using anywhere between 3% and 6% constantly (so, combined, that's between 6% and 12% of CPU time constantly being used by opensnitch).

Some excerpts from the log, after I changed the LogLevel to 0 in the config file:

[2024-04-26 20:43:15]  INF  Process monitor method ebpf
...
[2024-04-26 20:43:15]  INF  Using nftables firewall
...
[2024-04-26 20:43:15]  INF  Running on netfilter queue #0 ...
...
[2024-04-26 20:43:15]  INF  [eBPF] module loaded: /usr/lib/opensnitchd/ebpf/opensnitch-dns.o
...
[2024-04-26 20:43:16]  INF  Connected to the UI service on ///tmp/osui.sock
...

Note that, during the constant 4% to 6% CPU usage (from both opensnitch-ui and opensnitchd individually), I usually just see this in the log when the system is mostly idle:

[2024-04-26 20:43:44]  DBG  [eBPF exit event] -> 13412
[2024-04-26 20:43:45]  DBG  [ebpf] tcp map: 0 active items
[2024-04-26 20:43:45]  DBG  [ebpf] tcp6 map: 0 active items
[2024-04-26 20:43:45]  DBG  [ebpf] udp map: 0 active items
[2024-04-26 20:43:45]  DBG  [ebpf] udp6 map: 0 active items
[2024-04-26 20:43:46]  DBG  [eBPF exit event] -> 15698
[2024-04-26 20:43:46]  DBG  [eBPF exit event] -> 15698
[2024-04-26 20:43:46]  DBG  [eBPF exit event] -> 15698
[2024-04-26 20:43:46]  DBG  [eBPF exit event] -> 15698
[2024-04-26 20:43:49]  DBG  [eBPF exit event] -> 3164
[2024-04-26 20:43:49]  DBG  [eBPF exit event] -> 3164
[2024-04-26 20:43:50]  DBG  [ebpf] tcp map: 0 active items
[2024-04-26 20:43:50]  DBG  [ebpf] tcp6 map: 0 active items
[2024-04-26 20:43:50]  DBG  [ebpf] udp map: 0 active items
[2024-04-26 20:43:50]  DBG  [ebpf] udp6 map: 0 active items
[2024-04-26 20:43:55]  DBG  [ebpf] tcp map: 0 active items
[2024-04-26 20:43:55]  DBG  [ebpf] tcp6 map: 0 active items
[2024-04-26 20:43:55]  DBG  [ebpf] udp map: 0 active items
[2024-04-26 20:43:55]  DBG  [ebpf] udp6 map: 0 active items
[2024-04-26 20:43:55]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16213
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 4702
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 4591
[2024-04-26 20:43:56]  DBG  [eBPF exit event inCache] -> 4591
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 4591
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 15060
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 15060
[2024-04-26 20:43:56]  DBG  [eBPF exit event] -> 16684
[2024-04-26 20:43:58]  DBG  [eBPF exit event] -> 4591
[2024-04-26 20:44:00]  DBG  [ebpf] tcp map: 0 active items
[2024-04-26 20:44:00]  DBG  [ebpf] tcp6 map: 0 active items
[2024-04-26 20:44:00]  DBG  [ebpf] udp map: 0 active items
[2024-04-26 20:44:00]  DBG  [ebpf] udp6 map: 0 active items
[2024-04-26 20:44:05]  DBG  [ebpf] tcp map: 0 active items
[2024-04-26 20:44:05]  DBG  [ebpf] tcp6 map: 0 active items
[2024-04-26 20:44:05]  DBG  [ebpf] udp map: 0 active items
[2024-04-26 20:44:05]  DBG  [ebpf] udp6 map: 0 active items
[2024-04-26 20:44:07]  DBG  [eBPF exit event] -> 15060
[2024-04-26 20:44:08]  DBG  [eBPF exit event] -> 15060
[2024-04-26 20:44:08]  DBG  [eBPF exit event] -> 4591
[2024-04-26 20:44:08]  DBG  [eBPF exit event] -> 4591
[2024-04-26 20:44:08]  DBG  [eBPF exit event] -> 4591
[2024-04-26 20:44:08]  DBG  [eBPF exit event] -> 4774
[2024-04-26 20:44:10]  DBG  [ebpf] tcp6 map: 0 active items
[2024-04-26 20:44:10]  DBG  [ebpf] udp map: 0 active items
[2024-04-26 20:44:10]  DBG  [ebpf] udp6 map: 0 active items
[2024-04-26 20:44:10]  DBG  [ebpf] tcp map: 0 active items
[2024-04-26 20:44:15]  DBG  [ebpf] tcp map: 0 active items
[2024-04-26 20:44:15]  DBG  [ebpf] tcp6 map: 0 active items
[2024-04-26 20:44:15]  DBG  [ebpf] udp map: 0 active items
[2024-04-26 20:44:15]  DBG  [ebpf] udp6 map: 0 active items
[2024-04-26 20:44:16]  DBG  [eBPF exit event] -> 16312
[2024-04-26 20:44:16]  DBG  [eBPF exit event] -> 16313

@gustavo-iniguez-goya
Collaborator

gustavo-iniguez-goya commented Apr 27, 2024

Thank you!

I've been reviewing issue grpc/grpc#30428 again, and apparently this problem does not occur on grpcio 1.44.0 (#647) and seems to be fixed in v1.59 (grpc/grpc#30428 (comment)).

(I've tested it on Ubuntu 24.04 with python3-grpcio 1.51.1 and it doesn't suffer from this problem.)

So I'd suggest installing grpcio 1.44.0 for your user as described in the above issue, or 1.59.0, and seeing if it solves the problem.

Interestingly, the version which is installed on your system (1.48.4) does not exist in the release history of grpcio: https://pypi.org/project/grpcio/#history

If any of those versions solve the problem on your system, maybe we could have a list of known grpcio buggy versions.

--

By the way, those repetitive exit events were fixed in this commit: 15fcf67
The module opensnitch-procs.o was compiled here: https://github.com/evilsocket/opensnitch/actions/runs/8048390381
just in case you want to give it a try (I don't think it'll improve the CPU usage much...).
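
If you do try it, a rough sketch of swapping the module in (this assumes the module directory shown in your log and a systemd unit named opensnitch; both may differ on your distro):

$ sudo cp opensnitch-procs.o /usr/lib/opensnitchd/ebpf/opensnitch-procs.o
$ sudo systemctl restart opensnitch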

@bughunter2

bughunter2 commented Apr 27, 2024

Thanks for the good info. 🙂

I tried grpcio 1.62.2 and used the new ebpf modules you provided.
The modules alone didn't seem to help much (as you indeed expected).

Using grpcio 1.62.2 gave mixed results:
While the CPU usage of opensnitch-ui improved and dropped to between 0.3% and 1.0%, the CPU usage of opensnitchd dropped only slightly, ranging from 2.3% to 5.0% and mostly showing 3% to 4.7% (measured with top).

To be sure, I removed the grpcio 1.48.4 module from my system using rpm -e --nodeps python3-grpcio and installed the 1.62.2 one via sudo pip install grpcio, then restarted opensnitch (both UI and daemon) to make sure they used the new library.

EDIT: Maybe also useful to know: even if opensnitch-ui is not running, opensnitchd still consumes between 2.3% and 5.0% of the CPU's time.

@gustavo-iniguez-goya
Collaborator

Good news then! So on some systems, using Python grpcio > 1.44 and < 1.48 causes performance problems.

Now, regarding the CPU usage of opensnitchd: launch top on a terminal (top -p $(pgrep opensnitchd)) and tail -f /var/log/opensnitchd.log on another one, and see if you can correlate the activity in the log with the CPU spikes.

On my systems it's generally between 0.7% and 1%, mainly because of a couple of background threads.

It's true that I've lately had the "excessive" use of some syscalls on my radar; it's costly in general, but especially problematic under heavy load.

+   77,21%     0,12%  [kernel]          [k] do_syscall_64   
+   71,09%     0,03%  opensnitchd-new   [.] runtime/internal/syscall.Syscall6
+   >>>> 70,30%     68,81%  [kernel]          [k] inet_diag_dump_icsk
+   62,44%     0,12%  [kernel]          [k] entry_SYSCALL_64_after_hwframe
+   53,29%     0,12%  opensnitchd-new   [.] github.com/evilsocket/opensnitch/daemon/netlink.netlinkRequest
+   13,40%     0,00%  opensnitchd-new   [.] runtime.goexit.abi0
+   11,06%     0,00%  opensnitchd-new   [.] syscall.Syscall6
+   10,61%     0,00%  opensnitchd-new   [.] github.com/evilsocket/opensnitch/daemon/procmon/ebpf.monitorAlreadyEstablished
+   10,61%     0,00%  opensnitchd-new   [.] github.com/evilsocket/opensnitch/daemon/netlink.SocketsDump
+   10,61%     0,00%  opensnitchd-new   [.] github.com/vishvananda/netlink/nl.(*NetlinkRequest).Execute

Could you execute perf top -g -p $(pgrep opensnitchd), monitor it for some time while the CPU usage is at ~2%, and note which functions are at the top (the second column)? I think it'll be clear what is causing the CPU usage.

@bughunter2

Agreed, we seem to have narrowed the problem down a bit and that's good news!

I've followed your instructions and took a couple of screenshots. It seems it's usually the number of syscalls that causes the CPU usage, although it's not exactly clear to me why they occur in such numbers. The system is nearly idle at these moments, with just a browser tab or two causing only a couple of requests per minute. I looked at the idle moments when taking the screenshots. The opensnitch log still looks the same as when I posted it earlier.

Screenshots:

Screenshot_2024-04-28_11-56-41

... and:

Screenshot_2024-04-28_12-07-30

@gustavo-iniguez-goya
Collaborator

Yeah, same problem on my system. monitorAlreadyEstablished() is the culprit here.

The established connections collected by that thread are used by findInAlreadyEstablishedTCP() when we fail to obtain a new outbound connection via eBPF. But as far as I can tell, when this occurs the outbound connection is never found in the already-established conns list, which is why I've been wondering whether to get rid of it (in these cases we fall back to proc, and some connections are found in /proc/net/*).

Anyway, I have no idea why it's using 2-4% of the CPU on your system, while on my systems it's usually <1%.

@bughunter2

I guess the answer is relatively straightforward in this case. There's simply less CPU time available here, so 100% of the CPU's time represents fewer cycles on my machine than on yours. However, I can get my system to behave similarly to yours.

This system (a laptop) is configured to have "Intel Turbo Boost" disabled, and it also uses the powersave CPU governor by default. I only enable the "Intel Turbo Boost" option for certain games as I find the system to be plenty fast for daily use. On this system (it has an Intel Core i5-1335U), the powersave governor limits the CPU's frequency to anywhere between 400 MHz and 1300 MHz for the 'performance cores' and 400-900MHz for the 'efficient cores' (it has a hybrid performance CPU architecture).

The new CPU usage measurements:
(I've explicitly added the frequency arguments (-d and -u) here to show you the frequencies being used, though I don't actually set them anywhere explicitly.)

# Turbo disabled. Using powersave CPU governor.
$ echo 1 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
$ sudo cpupower frequency-set -g powersave -d 400MHz -u 1300MHz
$ top -p $(pgrep opensnitchd)

Result: opensnitchd CPU usage: min 2.3% max 5.6% (usually 2.7% or 3.3%)

Compare with:

# Turbo enabled. Using performance CPU governor.
$ echo 0 | sudo tee /sys/devices/system/cpu/intel_pstate/no_turbo
$ sudo cpupower frequency-set -g performance -d 400MHz -u 4600MHz
$ top -p $(pgrep opensnitchd)

Result: opensnitchd CPU usage: min 0.7% max 2.3% (usually 1.3% or 1.7%)
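
As a rough back-of-the-envelope check (my own illustrative numbers, not measurements): if the daemon's background work costs a roughly fixed ~40-60 million cycles per second, that's about 3-4.5% of a core at 1.3 GHz but only about 0.9-1.3% at 4.6 GHz, which lines up reasonably well with the two ranges above.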

Mystery solved I guess.

Thank you for debugging this together with me so we could get to the bottom of it!

@gustavo-iniguez-goya
Collaborator

Interesting information, thank you @bughunter2! I'll add it to the wiki.
