Keichi Posted February 11

Hello,

On my Unraid server I have this cache pool: 3x 1TB drives with one mirror, for a total of 1.9 TB usable capacity.

On the Unraid console, when I run:

df -h

I get this output:

root@Yggdrasil:~# df -h
Filesystem                   Size  Used Avail Use% Mounted on
rootfs                        24G  1.6G   22G   7% /
tmpfs                         32M  2.2M   30M   7% /run
/dev/sda1                     58G  2.5G   55G   5% /boot
overlay                       24G  1.6G   22G   7% /lib
overlay                       24G  1.6G   22G   7% /usr
devtmpfs                     8.0M     0  8.0M   0% /dev
tmpfs                         24G     0   24G   0% /dev/shm
tmpfs                        128M  1.5M  127M   2% /var/log
tmpfs                        1.0M     0  1.0M   0% /mnt/disks
tmpfs                        1.0M     0  1.0M   0% /mnt/remotes
tmpfs                        1.0M     0  1.0M   0% /mnt/addons
tmpfs                        1.0M     0  1.0M   0% /mnt/rootshare
(array disks here)
cache                        939G   23G  916G   3% /mnt/cache
cache/system                 933G   17G  916G   2% /mnt/cache/system
cache/domains                1.4T  417G  916G  32% /mnt/cache/domains
cache/appdata                916G  2.3M  916G   1% /mnt/cache/appdata
cache/appdata/mosquitto      916G  256K  916G   1% /mnt/cache/appdata/mosquitto
cache/appdata/grafana        916G  1.3M  916G   1% /mnt/cache/appdata/grafana
cache/appdata/homeassistant  916G   93M  916G   1% /mnt/cache/appdata/homeassistant
cache/appdata/mysql          916G  119M  916G   1% /mnt/cache/appdata/mysql
cache/appdata/nginx          916G  1.0M  916G   1% /mnt/cache/appdata/nginx
(etc. for the other containers)

The "cache" filesystem is reported as 939 GB with only 23 GB used. I don't really understand this value. Is it because of ZFS?

I use Glances (with Homepage), and I can't display the correct size of the cache pool: it seems to be reported the same way as by the df command. It's really not a big deal, but maybe I can use this to learn something more.

Thanks all,
K.
JorgeB Posted February 11

Because it's reported by dataset; look at the datasets and add them all up. The domains dataset alone is using 417G. Also note that those figures are in GiB, while the GUI uses GB.
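The GiB-vs-GB distinction JorgeB points out accounts for part of the apparent mismatch between df and the GUI. A minimal sketch of the conversion, using the 417G figure that df shows for cache/domains:

```python
# df -h reports binary units (GiB = 1024^3 bytes), while the Unraid GUI
# uses decimal units (GB = 1000^3 bytes), so the same data shows two numbers.
GIB = 1024 ** 3
GB = 1000 ** 3

domains_gib = 417                    # "417G" as shown by df -h above
domains_bytes = domains_gib * GIB
domains_gb = domains_bytes / GB

print(f"{domains_gib} GiB = {domains_gb:.1f} GB")  # 417 GiB = 447.8 GB
```

So a GUI working in decimal GB would show roughly 448 GB for the same dataset that df labels 417G.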
Keichi Posted February 11 (Author)

Hello JorgeB,

Indeed, it is by dataset. But why doesn't the main one (cache) report the full size of the pool? It seems to report only one drive of the three. That is the part I don't understand. Maybe I could have configured it better?

Thanks,
K.
JorgeB Posted February 11 (Solution)

Just now, Keichi said: "But why doesn't the main one (cache) report the full size of the pool?"

Because as far as df can see, there's no data there: the data is inside the datasets, and datasets are separate ZFS filesystems. For example, if you try to move some data from one dataset to another, a full copy will be done instead of a move.
Keichi Posted February 11 (Author)

OK, I understand. So it is impossible to report the full size of a ZFS cache pool with the tool I use.

Thanks for the help and, as always, for your clear answer.
K.
JorgeB Posted February 11

58 minutes ago, Keichi said: "So it is impossible to report the full size of a ZFS cache pool with the tool I use."

BTW, if you need to get the used or free space of a ZFS pool for a script or something, instead of df you can use:

zfs get -Hp -o value available,used <pool name>
Keichi Posted February 11 (Author)

Thanks! But this is for Homepage (https://github.com/gethomepage/homepage), which uses the Glances integration (https://github.com/nicolargo/glances). I will ask directly on the Glances GitHub for more information. I just wanted to understand the issue better, and from what you gave me, something is possible:

root@Yggdrasil:~# zfs get -Hp -o value available,used /mnt/cache
966035955456
963024068864

The values seem correct to me.

Thanks again JorgeB!
K.
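As a sanity check on JorgeB's command, a small sketch (using the two byte values printed above) shows that available + used adds up to roughly the 1.9 TB usable capacity mentioned at the top of the thread. The -H flag drops headers and -p prints exact byte counts, so the output is easy to parse:

```python
# Output of `zfs get -Hp -o value available,used /mnt/cache`,
# one exact byte count per line (available first, then used).
raw = """966035955456
963024068864"""

available, used = (int(line) for line in raw.splitlines())
total = available + used

print(f"available: {available / 1000**3:.1f} GB")
print(f"used:      {used / 1000**3:.1f} GB")
print(f"total:     {total / 1000**4:.2f} TB")  # ~1.93 TB, matching the pool size
```

In a real script you would feed the command's stdout into this parsing instead of a hard-coded string; note that for a dataset, available + used is only an approximation of pool capacity, since quotas and reservations can shift the numbers.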