BTRFS support for monitoring #373
Comments
I'll install a Fedora 35, which comes with BTRFS by default, and I'll let you know.
I've just stumbled onto Monitorix and noticed that there was a ZFS section, so I was looking for the btrfs section ;) I've set up btrfs on an Arch Linux NAS, and I'm using a Fedora 36 VM to play with btrfs and find its foibles. Note that there's an issue with
Great application btw :)
Yeah, sorry. I haven't had enough time yet to work on this. I installed F35 with BTRFS some weeks ago and I saw that the btrfs tools show a lot of information. I hope I can resume this work in the coming weeks.
Didn't know that. Anyway,
Thank you very much. Glad to know you are enjoying it.
Hello! Good to know that BTRFS is coming :)
No worries, life happens ;) I read #295 with interest, so I'd presumed that this would be a new feature and might go into that list? Just to clarify, yep,
Oh yes :) btrfs has its own
I think the biggest point (for the end-user) to understand is that a volume and a sub-volume report the same disk usage as the parent volume (at the moment), so there's no point / no way to see how large a subvolume is.
Sorry for the late reply.
#295 has a lot of wishes 😃 and I'm not sure if time will permit. I'd like to include the
Yes, once we have
I'm still struggling with the meaning of this output:
Can anyone clarify this, and which values are important here?
Here's mine. The sum of the "total" column gives you the sum of the pre-allocated data. Interesting, but not enough. I don't know if you are aware of how btrfs works; I assume not, so forgive me if that's not the case.
So in your case: you certainly have a single disk with at least 2.8 GB of capacity, of which 879 MB are used for storage. Mine is a RAID5 storage (N - 1 disk capacity), using 6 TB.
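The (N - 1) capacity rule mentioned above can be checked with a quick sketch. This assumes classic RAID5 semantics with equal-size disks (btrfs with mixed-size devices is more complicated); the disk sizes are illustrative, not from this thread:

```python
# Classic RAID5 usable capacity: (N - 1) times the smallest disk's size,
# since one disk's worth of space goes to parity.
disks_tb = [2, 2, 2, 2]          # illustrative: four 2 TB disks
usable_tb = (len(disks_tb) - 1) * min(disks_tb)
print(usable_tb)  # 6
```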
sudo btrfs device usage /mount/point gives more clues!
I don't have a screenshot of this, but BTRFS can generate multiple "Data" rows (stripes) that can be added up to get the real storage used.
Tell me if you need more info! I'm very interested in having this module in the next release :)
Thanks for the information, but I'm still not sure which values are important to show on graphs and how they relate to each other. I have the following stats in my freshly installed Fedora 36:
I would discard using the command
I would prefer to focus on the command
So, the question is: do you think it would make sense to put each line in the output into its own graph value? Something like this:
Other interesting information that could go on separate (smaller) graphs is:
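To make the "each line becomes a graph value" idea concrete, here is a minimal sketch that parses the "Overall:" section of `btrfs filesystem usage` output into name/value pairs. The sample text below is illustrative (typical of a small single-disk install), not taken from this thread, and the exact fields can vary by btrfs-progs version:

```python
import re

# Illustrative `btrfs filesystem usage` output (Overall section only);
# the sizes are made up, not taken from this issue.
SAMPLE = """\
Overall:
    Device size:                  10.00GiB
    Device allocated:              1.27GiB
    Device unallocated:            8.73GiB
    Device missing:                  0.00B
    Used:                        382.27MiB
    Free (estimated):              9.22GiB      (min: 4.86GiB)
    Data ratio:                       1.00
    Metadata ratio:                   2.00
    Global reserve:                3.50MiB      (used: 0.00B)
"""

UNITS = {'B': 1, 'KiB': 1024, 'MiB': 1024**2, 'GiB': 1024**3, 'TiB': 1024**4}

def to_number(token):
    """Convert '9.22GiB' to bytes; bare numbers (the ratios) pass through."""
    m = re.fullmatch(r'([\d.]+)([KMGT]?i?B)?', token)
    if not m:
        return None
    value, unit = float(m.group(1)), m.group(2)
    return value * UNITS[unit] if unit else value

def parse_overall(text):
    """Return {field name: numeric value} for the 'Overall:' section."""
    stats = {}
    in_overall = False
    for line in text.splitlines():
        if line.startswith('Overall:'):
            in_overall = True
            continue
        if in_overall:
            if not line.startswith(' '):   # a new section begins
                break
            name, _, rest = line.partition(':')
            tokens = rest.split()
            if tokens:
                stats[name.strip()] = to_number(tokens[0])
    return stats

overall = parse_overall(SAMPLE)
print(overall['Device size'])  # 10737418240.0
```

Each key/value pair in `overall` would map onto one curve of the proposed graph.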
I think 3 graphs:
It could be cool to get all possible values of the storage type: single, DUP, RAID1, RAID5, RAID6, etc. I'm not using this feature myself, but maybe some people would enjoy monitoring per subvolume (quotas can be set, so usage vs. quota could be monitored).
Since I have only a single disk, I cannot test all the possibilities. So, it is possible to use the commands
Where should this information appear?
An example?
Yes sir, look at all my screenshots ;) As for the subvolume feature, it was a proposal for advanced users. In my case I don't use it, so I will be unable to help with this. It could be written later as a "btrfs_subvolume.pm" module.
Yes, I know. I meant that I wanted to know how these values should be represented in the graphs.
Also, I think that the option list should accept only mountpoints, not devices, since the mountpoint is the common value accepted by the commands. Something like this:
Do you agree?
In a time-based graph it makes no sense, yes. Just a list, I think; keep in mind it could have multiple "Data" stripes. If you are able to detect the device by the mountpoint, yes. Multiple BTRFS volumes can be attached to the same system, of course.
Why do I need to detect the device by the mountpoint?
The mountpoint is mandatory in the command, so it's normal to declare it.
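Since the config would hold mountpoints, the monitor could verify each one against the kernel's mount table and recover the backing device at the same time. A sketch under that assumption; the sample mount table below is illustrative, and in real use you would read /proc/self/mounts:

```python
# Map mountpoints to devices for btrfs filesystems, from a mount table in
# /proc/self/mounts format: "device mountpoint fstype options dump pass".
# The sample table is illustrative, not taken from this thread.
SAMPLE_MOUNTS = """\
/dev/sda2 / btrfs rw,relatime,compress=zstd:1,subvol=/root 0 0
/dev/sda2 /home btrfs rw,relatime,compress=zstd:1,subvol=/home 0 0
/dev/sdb1 /boot ext4 rw,relatime 0 0
"""

def btrfs_mounts(mounts_text):
    """Return {mountpoint: device} for every btrfs entry in the table."""
    result = {}
    for line in mounts_text.splitlines():
        fields = line.split()
        if len(fields) >= 3 and fields[2] == 'btrfs':
            result[fields[1]] = fields[0]
    return result

mounts = btrfs_mounts(SAMPLE_MOUNTS)
# In real use: mounts = btrfs_mounts(open('/proc/self/mounts').read())
print(sorted(mounts))  # ['/', '/home']
```

A configured mountpoint that is absent from this map could then be reported as a configuration error instead of silently producing empty graphs.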
There are two ways to see btrfs.
I don't understand it, sorry. Data usage and errors can be obtained with the commands
Temperatures? Health? That is not related to BTRFS; you have this information in another graph.
The command line requires the mountpoint. Give me time to draw it, later today; I think you will understand my point of view with a drawing. And no, I don't need temperature; health I already have from SMART. BTRFS provides me: btrfs device stats /mount/point
That would be very useful. So far, I have this:
Still, I don't know if device names are useful.
Thoughts?
Just catching up with this... So, the device usage graph (main pane above), would show the usage on different physical disks? That would be interesting, but I'm not sure it adds real value. I mean... if I had a large change in data, but the total storage was still "half full"... do I care what is on each disk?? Not really... But do I care if a disk is starting to fail? Oh yes 😉 Sorry I have been away for a while, but I think this was a good start:
But I would tweak that top graph to be something like:
(for example - but my drawing is badly scaled / proportioned 😄) That might be what @mikaku is showing above anyway... I might have just misunderstood...
Your drawing is very similar to mine. In my drawing I tried to isolate each device/filesystem with a set of graphs (3 per device or filesystem). So basically, each device/filesystem has its own error graph. I think this way is scalable to an unlimited number of devices/filesystems.
Hello,
When you have a BTRFS volume, the first fear is DATA corruption.
The command sudo btrfs device stats /mount/point provides this:
[/dev/sdb].write_io_errs 0
[/dev/sdb].read_io_errs 0
[/dev/sdb].flush_io_errs 0
[/dev/sdb].corruption_errs 0
[/dev/sdb].generation_errs 0
[/dev/sdc].write_io_errs 0
[/dev/sdc].read_io_errs 0
[/dev/sdc].flush_io_errs 0
[/dev/sdc].corruption_errs 0
[/dev/sdc].generation_errs 0
[/dev/sdd].write_io_errs 0
[/dev/sdd].read_io_errs 0
[/dev/sdd].flush_io_errs 0
[/dev/sdd].corruption_errs 0
[/dev/sdd].generation_errs 0
[/dev/sde].write_io_errs 0
[/dev/sde].read_io_errs 0
[/dev/sde].flush_io_errs 0
[/dev/sde].corruption_errs 0
[/dev/sde].generation_errs 0
All five values for each disk should be shown.
(We need a way to list all the disk devices.)
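The five counters per device parse naturally into one dict per device, which maps directly onto the "one error graph per device/filesystem" layout discussed above. A minimal sketch in Python (the sample is abridged from the output above to two devices; all counters are zero on a healthy array):

```python
import re

# Abridged `btrfs device stats /mount/point` output from above
# (two of the four devices).
SAMPLE = """\
[/dev/sdb].write_io_errs 0
[/dev/sdb].read_io_errs 0
[/dev/sdb].flush_io_errs 0
[/dev/sdb].corruption_errs 0
[/dev/sdb].generation_errs 0
[/dev/sdc].write_io_errs 0
[/dev/sdc].read_io_errs 0
[/dev/sdc].flush_io_errs 0
[/dev/sdc].corruption_errs 0
[/dev/sdc].generation_errs 0
"""

LINE = re.compile(r'\[([^\]]+)\]\.(\w+)\s+(\d+)')

def parse_device_stats(text):
    """Return {device: {counter: value}} from `btrfs device stats` output."""
    stats = {}
    for line in text.splitlines():
        m = LINE.match(line)
        if m:
            dev, counter, value = m.group(1), m.group(2), int(m.group(3))
            stats.setdefault(dev, {})[counter] = value
    return stats

stats = parse_device_stats(SAMPLE)
print(stats['/dev/sdb']['corruption_errs'])  # 0
```

The outer keys give the disk-device list the comment asks for, with no extra configuration needed.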
The command sudo btrfs device usage /mount/point provides this:
/dev/sdb, ID: 1
Device size: 5.46TiB
Device slack: 0.00B
Data,RAID1: 3.92TiB
Metadata,RAID1: 5.00GiB
System,RAID1: 32.00MiB
Unallocated: 1.53TiB
/dev/sdc, ID: 3
Device size: 1.82TiB
Device slack: 0.00B
Data,RAID1: 407.00GiB
Metadata,RAID1: 1.00GiB
Unallocated: 1.42TiB
/dev/sdd, ID: 4
Device size: 2.73TiB
Device slack: 0.00B
Data,RAID1: 1.31TiB
Unallocated: 1.42TiB
/dev/sde, ID: 5
Device size: 3.64TiB
Device slack: 0.00B
Data,RAID1: 2.21TiB
Metadata,RAID1: 4.00GiB
System,RAID1: 32.00MiB
Unallocated: 1.42TiB
Ignore the ID; it changes every time you replace a dead disk.
Device size and Data, Metadata, System, etc. are interesting to keep an eye on.
Maybe a parameter is needed to say which redundancy mode is used.
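The per-device usage output above parses into one dict per device. Stripping the profile suffix ("Data,RAID1" becomes "Data") and summing also handles the multiple "Data" stripes mentioned earlier in the thread. A sketch, using the first two devices from the output above as sample data:

```python
import re

# Sample `btrfs device usage /mount/point` output (first two devices
# from the comment above).
SAMPLE = """\
/dev/sdb, ID: 1
   Device size:             5.46TiB
   Device slack:              0.00B
   Data,RAID1:              3.92TiB
   Metadata,RAID1:          5.00GiB
   System,RAID1:           32.00MiB
   Unallocated:             1.53TiB
/dev/sdc, ID: 3
   Device size:             1.82TiB
   Device slack:              0.00B
   Data,RAID1:            407.00GiB
   Metadata,RAID1:          1.00GiB
   Unallocated:             1.42TiB
"""

UNITS = {'B': 1, 'KiB': 1024, 'MiB': 1024**2, 'GiB': 1024**3, 'TiB': 1024**4}

def parse_device_usage(text):
    """Return {device: {field: bytes}}, summing repeated stripes per field."""
    devices = {}
    current = None
    for line in text.splitlines():
        head = re.match(r'(/dev/\S+), ID: \d+', line)
        if head:
            current = devices.setdefault(head.group(1), {})
            continue
        m = re.match(r'\s+([^:]+):\s+([\d.]+)([KMGT]?i?B)', line)
        if m and current is not None:
            # Drop the profile suffix ("Data,RAID1" -> "Data") and sum,
            # since multiple Data stripes can appear for one device.
            field = m.group(1).split(',')[0]
            value = float(m.group(2)) * UNITS[m.group(3)]
            current[field] = current.get(field, 0) + value
    return devices

usage = parse_device_usage(SAMPLE)
print(round(usage['/dev/sdb']['Data'] / 1024**4, 2))  # 3.92
```

As the comment notes, the ID is intentionally ignored, and devices that omit a field (sdc has no System row here) simply have no entry for it.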
For me those are the main things. Maybe someone else would like something more.