
Wrong memory usage reported on panel (includes Linux kernel cache) #1696

Closed
xaviergmail opened this issue Sep 3, 2019 · 4 comments · Fixed by pterodactyl/daemon#105


xaviergmail commented Sep 3, 2019

Background

  • Panel or Daemon: Uncertain
  • Version of Panel/Daemon: 0.7.15 / 0.6.12
  • Server's OS: Debian Stretch on both panel and daemon servers
  • Your Computer's OS & Browser: Windows 10, Chrome

Describe the bug

The panel currently shows 11GB of RAM usage, while in reality the server process only consumes 1.4GB.

(Screenshots: panel stats readout and live usage graph, both showing ~11GB of memory in use.)

While docker stats shows

CONTAINER ID  NAME         CPU %    MEM USAGE / LIMIT      MEM %    NET I/O          BLOCK I/O           PIDS
<redacted>    <redacted>   3.01%    1.467GiB / 19.47GiB    7.53%    542MB / 1.74GB   7.87GB / 1.52MB     28
<redacted>    <redacted>   2.94%    651.8MiB / 8.041GiB    7.91%    240MB / 602MB    7.89GB / 1.47MB     26

And free -h shows

              total        used        free      shared  buff/cache   available
Mem:            62G        3.5G         28G        258M         30G         58G
Swap:           58G        1.0M         58G

$ uname -a
Linux 4.19.0-5-amd64 #1 SMP Debian 4.19.37-5+deb10u2 (2019-08-08) x86_64 GNU/Linux

$ sudo docker info
Client:
 Debug Mode: false

Server:
 Containers: 5
  Running: 4
  Paused: 0
  Stopped: 1
 Images: 7
 Server Version: 19.03.1
 Storage Driver: btrfs
  Build Version: Btrfs v4.7.3
  Library Version: 101
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
 runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
 init version: fec3683
 Security Options:
  apparmor
  seccomp
   Profile: default
 Kernel Version: 4.19.0-5-amd64
 Operating System: Debian GNU/Linux 10 (buster)
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 62.79GiB
 Name: <redacted>
 ID: <redacted>
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false

WARNING: No swap limit support

To Reproduce

  1. Start any game server that reads a lot of game assets from disk (in my case it is both read- and write-heavy).
  2. Watch the RAM usage on the panel climb during this phase (Linux page cache); it never goes back down.
  3. Run echo 1 > /proc/sys/vm/drop_caches on the Docker host to drop the page cache (or inspect the cgroup counters directly; see the sketch after this list).
  4. Notice a dramatic decrease in RAM usage on the live graph. In my case, it dropped from 11.0GB to 1.1GB.

Expected behavior
The panel should show the actual process RAM usage, not the RAM the Linux kernel is using for page cache.

@xaviergmail xaviergmail changed the title Wrong memory usage reported on panel Wrong memory/CPU usage reported on panel Sep 3, 2019
@StealWonders

Duplicate of #1356 maybe

@xaviergmail xaviergmail changed the title Wrong memory/CPU usage reported on panel Wrong memory usage reported on panel (includes Linux kernel cache) Sep 5, 2019
@xaviergmail (Author)

I've gone ahead and edited the issue with my new findings.

@DaneEveritt (Member)

This is likely being caused by incorrect math in the old daemon; the new one uses the same logic Docker uses to calculate usage:

https://github.com/pterodactyl/wings/blob/develop/server/resources.go

@lancepioch lancepioch added the bug Something that's not working as it's intended to be. label Dec 15, 2019
@DaneEveritt (Member)

Closing, fixed in wings.
