[Bug] Storage reported as almost 100% #1049

Open
melsophos opened this issue Mar 19, 2024 · 1 comment

### Description of the bug

I have three disks connected to my server (NUC8i3BEH): one for the system (/dev/sda, M.2 ATA SSD) and two for the data (a 2.5" Samsung SSD and an external 3.5" Western Digital HDD). The system disk appears as full, whereas `df` shows only 4% usage (even when executed inside the Docker container). The information for the other two disks is correct.

This sounds similar to #1032.


### How to reproduce

No response

### Relevant log output

Running `curl http://localhost:3001/info | jq`

```json
{
  "os": {
    "arch": "x64",
    "distro": "Ubuntu",
    "kernel": "6.2.0-39-generic",
    "platform": "linux",
    "release": "23.04",
    "uptime": 3619225.61,
    "dash_version": "5.8.3",
    "dash_buildhash": "f7ac2728b89a6c75502c9c736c46a94ff386889b"
  },
  "cpu": {
    "brand": "Intel",
    "model": "Core™ i3-8109U",
    "cores": 2,
    "ecores": 0,
    "pcores": 2,
    "threads": 4,
    "frequency": 3.6
  },
  "ram": {
    "size": 8178470912,
    "layout": [
      {
        "brand": "Crucial",
        "type": "DDR4",
        "frequency": 2667
      },
      {
        "brand": "Crucial",
        "type": "DDR4",
        "frequency": 2667
      }
    ]
  },
  "storage": [
    {
      "size": 512110190592,
      "disks": [
        {
          "device": "sda",
          "brand": "ATA",
          "type": "SSD"
        }
      ]
    },
    {
      "size": 4000787030016,
      "disks": [
        {
          "device": "sdb",
          "brand": "Samsung",
          "type": "SSD"
        }
      ]
    },
    {
      "size": 3000592982016,
      "disks": [
        {
          "device": "sdc",
          "brand": "External",
          "type": "HD"
        }
      ]
    }
  ],
  "network": {
    "interfaceSpeed": 1000,
    "speedDown": 0,
    "speedUp": 0,
    "lastSpeedTest": 0,
    "type": "Wired",
    "publicIp": ""
  },
  "gpu": {
    "layout": []
  }
}
```


```shell
# curl http://localhost:3001/load/storage
[510765391872,2986302857216,2582610644992]
```
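The percentages implied by the two endpoints can be reproduced by dividing each `/load/storage` entry by the matching `size` from the `storage` list in `/info` (assuming the arrays are in the same order). A quick sketch, with the values copied from the outputs above:

```python
# Reproduce the usage percentages implied by the two API outputs above.
sizes = [512110190592, 4000787030016, 3000592982016]  # /info -> "storage" sizes
used = [510765391872, 2986302857216, 2582610644992]   # /load/storage

for dev, size, u in zip(["sda", "sdb", "sdc"], sizes, used):
    print(f"{dev}: {u / size:.1%} used")
# sda: 99.7% used
# sdb: 74.6% used
# sdc: 86.1% used
```

Note that the first entry already accounts for nearly the whole disk, which would explain the ~100% shown in the dashboard despite `df` reporting 4% for the system disk.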


### Info output of dashdot cli

```shell
INFO
=========
Yarn: 3.7.0
Node: v20.11.0
Dash: 5.8.3

Cwd: /app
Hash: f7ac2728b89a6c75502c9c736c46a94ff386889b
Platform: Linux 63117f0a668e 6.2.0-39-generic #40-Ubuntu SMP PREEMPT_DYNAMIC Tue Nov 14 14:18:00 UTC 2023 x86_64 Linux
Docker image: base
In Docker: true
In Docker (env): true
In Podman: false
```


### What browsers are you seeing the problem on?

Firefox

### Where is your instance running?

Linux Server

### Additional context

_No response_

ithinkmax commented May 8, 2024

I have the same error: dashdot runs in Docker on a Synology NAS with 80% free space, yet it shows 99.5% used.

This is the `df` output from inside the Docker container (copied from Portainer):

```shell
/app # df
Filesystem              1K-blocks        Used   Available Use% Mounted on
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /
tmpfs                       65536           0       65536    0% /dev
tmpfs                    16320904           0    16320904    0% /sys/fs/cgroup
shm                         65536           0       65536    0% /dev/shm
/dev/md0                  2385528     1634316      632428   72% /mnt/host
tmpfs                    16320904           0    16320904    0% /mnt/host/sys/fs/cgroup
devtmpfs                 16283540           0    16283540    0% /mnt/host/proc/bus/usb
devtmpfs                 16283540           0    16283540    0% /mnt/host/dev
tmpfs                    16320904         244    16320660    0% /mnt/host/dev/shm
tmpfs                  1073741824           0  1073741824    0% /mnt/host/dev/virtualization
tmpfs                    16320904       44324    16276580    0% /mnt/host/run
tmpfs                    16320904        2908    16317996    0% /mnt/host/tmp
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /mnt/host/volume1
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /mnt/host/volume1/@docker
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /mnt/host/volume1/@docker/btrfs
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /mnt/host/volume1/@docker/btrfs/subvolumes/6c2430a37f5792df426ae39dd5f319ac4005760a66af8c58c204c949d9044c18
tmpfs                       65536           0       65536    0% /mnt/host/volume1/@docker/btrfs/subvolumes/6c2430a37f5792df426ae39dd5f319ac4005760a66af8c58c204c949d9044c18/dev
shm                         65536           0       65536    0% /mnt/host/volume1/@docker/btrfs/subvolumes/6c2430a37f5792df426ae39dd5f319ac4005760a66af8c58c204c949d9044c18/dev/shm
tmpfs                    16320904           0    16320904    0% /mnt/host/volume1/@docker/btrfs/subvolumes/6c2430a37f5792df426ae39dd5f319ac4005760a66af8c58c204c949d9044c18/sys/fs/cgroup
none                    524288000   114259876   410028124   22% /mnt/host/volume1/ALi-Commerciale
none                    524288000    81498300   442789700   16% /mnt/host/volume1/ALi-Amministrazione
none                   2621440000  1575776384  1045663616   60% /mnt/host/volume1/ALi-Produzione
none                    786432000      408708   786023292    0% /mnt/host/volume1/PagaRent
/volume1/@ali-admin@  74981076176 15275270516 59705805660  20% /mnt/host/volume1/ALi-Admin
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /etc/resolv.conf
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /etc/hostname
/dev/mapper/cachedev_0 74981076176 15275270516 59705805660  20% /etc/hosts
```
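For anyone debugging this, the container root and the host mount are different filesystems, so comparing them directly can show where the numbers diverge. A quick sketch, assuming the host filesystem is bind-mounted at `/mnt/host` as in the output above (adjust the path if your compose file mounts it elsewhere):

```shell
# Compare what df reports for the container root vs. the host mount.
# -P forces the POSIX one-line-per-filesystem output, so wrapped device
# names (like /dev/mapper/cachedev_0 above) stay on one line.
df -P /
df -P /mnt/host 2>/dev/null || echo "/mnt/host is not mounted in this container"
```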
