Hi guys
Since I am running an RC, I believe it is better to ask for help here. My Docker containers started crashing/misbehaving, and I also noticed that some data was written to the array even though the share is set to use the cache as preferred.
I gave the system a reboot last night, thinking that would resolve the issue. No luck.
This morning I could focus a bit more on the issue, and found that the syslog reports "shfs: cache disk full".
This is odd, since the GUI shows there is more than enough space (I have the mover scheduled to run daily to clear out all data that does not need to stay on the cache). I have, however, had a few high-usage days where I think the cache was pushed beyond its capacity, which might be related. The cache is set to a minimum free space of 20GB, which I believed was sufficient for the Docker appdata.
The GUI shows the following:
The pool is configured as RAID0, but the details given via the cache settings in the GUI do not seem correct (Data total should be around 480GB):
Data, RAID0: total=130.00GiB, used=128.97GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=5.00GiB, used=3.63GiB
GlobalReserve, single: total=282.20MiB, used=0.00B
What I have tried:
Rebalance (a few times, with the Docker containers stopped)
Trim, then rebalance again
So far nothing seems to reset the total capacity to the correct value.
Does anybody have an idea what could be causing this?
I am busy attempting to move appdata to the array, but the Plex metadata (2 million plus files) is slowing the process down.
Diagnostics attached.
df -H output:
root@Storage:/boot# df -H
Filesystem Size Used Avail Use% Mounted on
rootfs 17G 917M 16G 6% /
tmpfs 34M 934k 33M 3% /run
devtmpfs 17G 0 17G 0% /dev
tmpfs 17G 0 17G 0% /dev/shm
cgroup_root 8.4M 0 8.4M 0% /sys/fs/cgroup
tmpfs 135M 3.4M 131M 3% /var/log
/dev/sda1 16G 621M 15G 4% /boot
/dev/loop0 9.4M 9.4M 0 100% /lib/modules
/dev/loop1 5.6M 5.6M 0 100% /lib/firmware
/dev/md1 4.0T 4.0T 86G 98% /mnt/disk1
/dev/md2 4.0T 3.9T 155G 97% /mnt/disk2
/dev/md3 4.0T 4.0T 28G 100% /mnt/disk3
/dev/md4 4.0T 3.9T 113G 98% /mnt/disk4
/dev/md5 4.0T 4.0T 84G 98% /mnt/disk5
/dev/md6 3.0T 3.0T 76G 98% /mnt/disk6
/dev/md7 3.0T 3.0T 74G 98% /mnt/disk7
/dev/md8 3.0T 3.0T 90G 98% /mnt/disk8
/dev/md9 3.0T 3.0T 79G 98% /mnt/disk9
/dev/md10 3.0T 3.0T 45G 99% /mnt/disk10
/dev/md11 4.0T 4.0T 78G 99% /mnt/disk11
/dev/md12 3.0T 2.9T 129G 96% /mnt/disk12
/dev/md13 3.0T 2.8T 204G 94% /mnt/disk13
/dev/md15 4.0T 3.9T 138G 97% /mnt/disk15
/dev/md16 4.0T 3.8T 218G 95% /mnt/disk16
/dev/md17 4.0T 3.8T 222G 95% /mnt/disk17
/dev/md18 3.0T 2.9T 125G 96% /mnt/disk18
/dev/md19 2.0T 1.9T 184G 91% /mnt/disk19
/dev/md20 8.0T 7.6T 451G 95% /mnt/disk20
/dev/md21 8.0T 7.9T 190G 98% /mnt/disk21
/dev/md22 4.0T 3.8T 207G 95% /mnt/disk22
/dev/sdt1 481G 147G 331G 31% /mnt/cache
shfs 82T 80T 3.0T 97% /mnt/user0
shfs 83T 80T 3.3T 97% /mnt/user
google: 1.2P 6.0T 1.2P 1% /mnt/disks/google
root@Storage:/boot#
Other commands:
root@Storage:/boot# btrfs fi df /mnt/cache
Data, RAID0: total=130.00GiB, used=128.97GiB
System, RAID1: total=32.00MiB, used=16.00KiB
Metadata, RAID1: total=5.00GiB, used=3.63GiB
GlobalReserve, single: total=282.20MiB, used=0.00B
root@Storage:/boot# btrfs fi show /mnt/cache
Label: none uuid: a38f3698-5c6d-43b0-aa5d-1aaee1e81822
Total devices 2 FS bytes used 132.61GiB
devid 1 size 223.57GiB used 70.03GiB path /dev/sdt1
devid 2 size 223.57GiB used 70.03GiB path /dev/sdo1
root@Storage:/boot#
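Side note: if I understand btrfs correctly, the "total" figures from btrfs fi df are allocated chunk space, not pool capacity. A quick sanity check (my own arithmetic, not output from any tool) suggests the per-device "used 70.03GiB" above is exactly the chunk allocation:

```shell
# Rough sanity check (assumption: RAID0 data is split evenly across the
# two devices, while RAID1 metadata/system keep a full copy on each).
# Data 130GiB / 2 devices + Metadata 5GiB + System 32MiB, per device:
per_device=$(awk 'BEGIN { printf "%.2f", 130.00/2 + 5.00 + 32/1024 }')
echo "$per_device GiB allocated per device"
```

Which matches the 70.03GiB per devid shown above, so if that reading is right, only the allocated chunks are being reported, and the real question is why writes fail while most of the pool is unallocated.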
Thank you guys, much appreciated!